From 5a6246fc7ef7a97fa500444350b3a1a3402de3aa Mon Sep 17 00:00:00 2001
From: Dylan Cashman
Date: Mon, 26 Aug 2024 00:55:40 -0400
Subject: [PATCH] Add workshop papers and data for TVCG papers

---
 program/event_a-ldav.html | 2 +-
 program/event_s-vds.html | 2 +-
 program/event_w-beliv.html | 2 +-
 program/event_w-eduvis.html | 2 +-
 program/event_w-energyvis.html | 2 +-
 program/event_w-future.html | 2 +-
 program/event_w-nlviz.html | 2 +-
 program/event_w-storygenai.html | 2 +-
 program/event_w-topoinvis.html | 2 +-
 program/event_w-uncertainty.html | 2 +-
 program/event_w-vis4climate.html | 2 +-
 program/paper_a-ldav-1002.html | 127 +++++++++++++++++
 program/paper_a-ldav-1003.html | 127 +++++++++++++++++
 program/paper_a-ldav-1006.html | 127 +++++++++++++++++
 program/paper_a-ldav-1011.html | 127 +++++++++++++++++
 program/paper_a-ldav-1016.html | 127 +++++++++++++++++
 program/paper_a-ldav-1018.html | 127 +++++++++++++++++
 program/paper_s-vds-1000.html | 127 +++++++++++++++++
 program/paper_s-vds-1002.html | 127 +++++++++++++++++
 program/paper_s-vds-1007.html | 127 +++++++++++++++++
 program/paper_s-vds-1013.html | 127 +++++++++++++++++
 program/paper_s-vds-1021.html | 127 +++++++++++++++++
 program/paper_s-vds-1029.html | 127 +++++++++++++++++
 program/paper_v-cga-9866547.html | 2 +-
 program/paper_v-full-1202.html | 2 +-
 program/paper_v-full-1770.html | 2 +-
 program/paper_v-short-1186.html | 2 +-
 program/paper_v-tvcg-20223193756.html | 2 +-
 program/paper_v-tvcg-20223229017.html | 2 +-
 program/paper_v-tvcg-20233261320.html | 2 +-
 program/paper_v-tvcg-20233275925.html | 2 +-
 program/paper_v-tvcg-20233287585.html | 2 +-
 program/paper_v-tvcg-20233289292.html | 2 +-
 program/paper_v-tvcg-20233299602.html | 2 +-
 program/paper_v-tvcg-20233302308.html | 2 +-
 program/paper_v-tvcg-20233306356.html | 2 +-
 program/paper_v-tvcg-20233310019.html | 2 +-
 program/paper_v-tvcg-20233316469.html | 2 +-
 program/paper_v-tvcg-20233322372.html | 2 +-
 program/paper_v-tvcg-20233322898.html | 2 +-
 program/paper_v-tvcg-20233323150.html | 2 +-
 program/paper_v-tvcg-20233324851.html | 2 +-
 program/paper_v-tvcg-20233326698.html | 2 +-
 program/paper_v-tvcg-20233330262.html | 2 +-
 program/paper_v-tvcg-20233332511.html | 2 +-
 program/paper_v-tvcg-20233332999.html | 2 +-
 program/paper_v-tvcg-20233333356.html | 2 +-
 program/paper_v-tvcg-20233334513.html | 2 +-
 program/paper_v-tvcg-20233334755.html | 2 +-
 program/paper_v-tvcg-20233336588.html | 2 +-
 program/paper_v-tvcg-20233337173.html | 2 +-
 program/paper_v-tvcg-20233337396.html | 2 +-
 program/paper_v-tvcg-20233337642.html | 2 +-
 program/paper_v-tvcg-20233338451.html | 2 +-
 program/paper_v-tvcg-20233340770.html | 2 +-
 program/paper_v-tvcg-20233341990.html | 2 +-
 program/paper_v-tvcg-20233345340.html | 2 +-
 program/paper_v-tvcg-20233345373.html | 2 +-
 program/paper_v-tvcg-20233346640.html | 2 +-
 program/paper_v-tvcg-20233346641.html | 2 +-
 program/paper_v-tvcg-20233346713.html | 2 +-
 program/paper_v-tvcg-20243350076.html | 2 +-
 program/paper_v-tvcg-20243354561.html | 2 +-
 program/paper_v-tvcg-20243355884.html | 2 +-
 program/paper_v-tvcg-20243356566.html | 2 +-
 program/paper_v-tvcg-20243358919.html | 2 +-
 program/paper_v-tvcg-20243364388.html | 2 +-
 program/paper_v-tvcg-20243364841.html | 2 +-
 program/paper_v-tvcg-20243365089.html | 2 +-
 program/paper_v-tvcg-20243368060.html | 2 +-
 program/paper_v-tvcg-20243368621.html | 2 +-
 program/paper_v-tvcg-20243372104.html | 2 +-
 program/paper_v-tvcg-20243372620.html | 2 +-
 program/paper_v-tvcg-20243374571.html | 2 +-
 program/paper_v-tvcg-20243376406.html | 2 +-
 program/paper_v-tvcg-20243381453.html | 2 +-
 program/paper_v-tvcg-20243382607.html | 2 +-
 program/paper_v-tvcg-20243382760.html | 2 +-
 program/paper_v-tvcg-20243383089.html | 2 +-
 program/paper_v-tvcg-20243385118.html | 2 +-
 program/paper_v-tvcg-20243390219.html | 2 +-
 program/paper_v-tvcg-20243392476.html | 2 +-
 program/paper_v-tvcg-20243392587.html | 2 +-
 program/paper_v-tvcg-20243394745.html | 2 +-
 program/paper_v-tvcg-20243397004.html | 2 +-
 program/paper_v-tvcg-20243402610.html | 2 +-
 program/paper_v-tvcg-20243402834.html | 2 +-
 program/paper_v-tvcg-20243406387.html | 2 +-
 program/paper_v-tvcg-20243408255.html | 2 +-
 program/paper_v-tvcg-20243411575.html | 2 +-
 program/paper_v-tvcg-20243411786.html | 2 +-
 program/paper_v-tvcg-20243413195.html | 2 +-
 program/paper_w-beliv-1001.html | 127 +++++++++++++++++
 program/paper_w-beliv-1004.html | 127 +++++++++++++++++
 program/paper_w-beliv-1005.html | 127 +++++++++++++++++
 program/paper_w-beliv-1007.html | 127 +++++++++++++++++
 program/paper_w-beliv-1008.html | 127 +++++++++++++++++
 program/paper_w-beliv-1009.html | 127 +++++++++++++++++
 program/paper_w-beliv-1015.html | 127 +++++++++++++++++
 program/paper_w-beliv-1016.html | 127 +++++++++++++++++
 program/paper_w-beliv-1018.html | 127 +++++++++++++++++
 program/paper_w-beliv-1020.html | 127 +++++++++++++++++
 program/paper_w-beliv-1021.html | 127 +++++++++++++++++
 program/paper_w-beliv-1026.html | 127 +++++++++++++++++
 program/paper_w-beliv-1027.html | 127 +++++++++++++++++
 program/paper_w-beliv-1033.html | 127 +++++++++++++++++
 program/paper_w-beliv-1034.html | 127 +++++++++++++++++
 program/paper_w-beliv-1035.html | 127 +++++++++++++++++
 program/paper_w-beliv-1037.html | 127 +++++++++++++++++
 program/paper_w-eduvis-1007.html | 127 +++++++++++++++++
 program/paper_w-eduvis-1008.html | 127 +++++++++++++++++
 program/paper_w-eduvis-1010.html | 127 +++++++++++++++++
 program/paper_w-eduvis-1013.html | 127 +++++++++++++++++
 program/paper_w-eduvis-1015.html | 127 +++++++++++++++++
 program/paper_w-eduvis-1017.html | 127 +++++++++++++++++
 program/paper_w-eduvis-1018.html | 127 +++++++++++++++++
 program/paper_w-eduvis-1019.html | 127 +++++++++++++++++
 program/paper_w-eduvis-1020.html | 127 +++++++++++++++++
 program/paper_w-eduvis-1026.html | 127 +++++++++++++++++
 program/paper_w-eduvis-1027.html | 127 +++++++++++++++++
 program/paper_w-eduvis-1028.html | 127 +++++++++++++++++
 program/paper_w-eduvis-1029.html | 127 +++++++++++++++++
 program/paper_w-eduvis-1030.html | 127 +++++++++++++++++
 program/paper_w-eduvis-1031.html | 127 +++++++++++++++++
 program/paper_w-energyvis-1762.html | 127 +++++++++++++++++
 program/paper_w-energyvis-2646.html | 127 +++++++++++++++++
 program/paper_w-energyvis-2743.html | 127 +++++++++++++++++
 program/paper_w-energyvis-2845.html | 127 +++++++++++++++++
 program/paper_w-energyvis-3496.html | 127 +++++++++++++++++
 program/paper_w-energyvis-4332.html | 127 +++++++++++++++++
 program/paper_w-energyvis-6102.html | 127 +++++++++++++++++
 program/paper_w-energyvis-9750.html | 127 +++++++++++++++++
 program/paper_w-future-1007.html | 127 +++++++++++++++++
 program/paper_w-future-1008.html | 127 +++++++++++++++++
 program/paper_w-future-1011.html | 127 +++++++++++++++++
 program/paper_w-future-1012.html | 127 +++++++++++++++++
 program/paper_w-future-1013.html | 127 +++++++++++++++++
 program/paper_w-nlviz-1004.html | 127 +++++++++++++++++
 program/paper_w-nlviz-1007.html | 127 +++++++++++++++++
 program/paper_w-nlviz-1008.html | 127 +++++++++++++++++
 program/paper_w-nlviz-1009.html | 127 +++++++++++++++++
 program/paper_w-nlviz-1010.html | 127 +++++++++++++++++
 program/paper_w-nlviz-1011.html | 127 +++++++++++++++++
 program/paper_w-nlviz-1016.html | 127 +++++++++++++++++
 program/paper_w-nlviz-1019.html | 127 +++++++++++++++++
 program/paper_w-nlviz-1020.html | 127 +++++++++++++++++
 program/paper_w-nlviz-1021.html | 127 +++++++++++++++++
 program/paper_w-nlviz-1022.html | 127 +++++++++++++++++
 program/paper_w-storygenai-5237.html | 127 +++++++++++++++++
 program/paper_w-storygenai-6168.html | 127 +++++++++++++++++
 program/paper_w-storygenai-7043.html | 127 +++++++++++++++++
 program/paper_w-storygenai-7072.html | 127 +++++++++++++++++
 program/paper_w-topoinvis-1027.html | 127 +++++++++++++++++
 program/paper_w-topoinvis-1031.html | 127 +++++++++++++++++
 program/paper_w-topoinvis-1033.html | 127 +++++++++++++++++
 program/paper_w-topoinvis-1034.html | 127 +++++++++++++++++
 program/paper_w-topoinvis-1038.html | 127 +++++++++++++++++
 program/paper_w-topoinvis-1041.html | 127 +++++++++++++++++
 program/paper_w-uncertainty-1007.html | 127 +++++++++++++++++
 program/paper_w-uncertainty-1009.html | 127 +++++++++++++++++
 program/paper_w-uncertainty-1010.html | 127 +++++++++++++++++
 program/paper_w-uncertainty-1011.html | 127 +++++++++++++++++
 program/paper_w-uncertainty-1012.html | 127 +++++++++++++++++
 program/paper_w-uncertainty-1013.html | 127 +++++++++++++++++
 program/paper_w-uncertainty-1014.html | 127 +++++++++++++++++
 program/paper_w-uncertainty-1015.html | 127 +++++++++++++++++
 program/paper_w-uncertainty-1016.html | 127 +++++++++++++++++
 program/paper_w-uncertainty-1017.html | 127 +++++++++++++++++
 program/paper_w-uncertainty-1018.html | 127 +++++++++++++++++
 program/paper_w-uncertainty-1019.html | 127 +++++++++++++++++
 program/paper_w-vis4climate-1000.html | 127 +++++++++++++++++
 program/paper_w-vis4climate-1008.html | 127 +++++++++++++++++
 program/paper_w-vis4climate-1011.html | 127 +++++++++++++++++
 program/paper_w-vis4climate-1018.html | 127 +++++++++++++++++
 program/paper_w-vis4climate-1023.html | 127 +++++++++++++++++
 program/paper_w-vis4climate-1024.html | 127 +++++++++++++++++
 program/paperlist.html | 2 +-
 program/papers.json | 2 +-
 program/serve_paper_list.json | 2 +-
 program/serve_session_list.json | 2 +-
 program/session_a-ldav0.html | 187 ++++++++++++++++++++++++++
 program/session_full0.html | 2 +-
 program/session_s-vds0.html | 187 ++++++++++++++++++++++++++
 program/session_short0.html | 2 +-
 program/session_tvcg0.html | 2 +-
 program/session_w-beliv0.html | 187 ++++++++++++++++++++++++++
 program/session_w-eduvis0.html | 187 ++++++++++++++++++++++++++
 program/session_w-energyvis0.html | 187 ++++++++++++++++++++++++++
 program/session_w-future0.html | 187 ++++++++++++++++++++++++++
 program/session_w-nlviz0.html | 187 ++++++++++++++++++++++++++
 program/session_w-storygenai0.html | 187 ++++++++++++++++++++++++++
 program/session_w-topoinvis0.html | 187 ++++++++++++++++++++++++++
 program/session_w-uncertainty0.html | 187 ++++++++++++++++++++++++++
 program/session_w-vis4climate0.html | 187 ++++++++++++++++++++++++++
 194 files changed, 14336 insertions(+), 87 deletions(-)
 create mode 100644 program/paper_a-ldav-1002.html
 create mode 100644 program/paper_a-ldav-1003.html
 create mode 100644 program/paper_a-ldav-1006.html
 create mode 100644 program/paper_a-ldav-1011.html
 create mode 100644 program/paper_a-ldav-1016.html
 create mode 100644 program/paper_a-ldav-1018.html
 create mode 100644 program/paper_s-vds-1000.html
 create mode 100644 program/paper_s-vds-1002.html
 create mode 100644 program/paper_s-vds-1007.html
 create mode 100644 program/paper_s-vds-1013.html
 create mode 100644 program/paper_s-vds-1021.html
 create mode 100644 program/paper_s-vds-1029.html
 create mode 100644 program/paper_w-beliv-1001.html
 create mode 100644 program/paper_w-beliv-1004.html
 create mode 100644 program/paper_w-beliv-1005.html
 create mode 100644 program/paper_w-beliv-1007.html
 create mode 100644 program/paper_w-beliv-1008.html
 create mode 100644 program/paper_w-beliv-1009.html
 create mode 100644 program/paper_w-beliv-1015.html
 create mode 100644 program/paper_w-beliv-1016.html
 create mode 100644 program/paper_w-beliv-1018.html
 create mode 100644 program/paper_w-beliv-1020.html
 create mode 100644 program/paper_w-beliv-1021.html
 create mode 100644 program/paper_w-beliv-1026.html
 create mode 100644 program/paper_w-beliv-1027.html
 create mode 100644 program/paper_w-beliv-1033.html
 create mode 100644 program/paper_w-beliv-1034.html
 create mode 100644 program/paper_w-beliv-1035.html
 create mode 100644 program/paper_w-beliv-1037.html
 create mode 100644 program/paper_w-eduvis-1007.html
 create mode 100644 program/paper_w-eduvis-1008.html
 create mode 100644 program/paper_w-eduvis-1010.html
 create mode 100644 program/paper_w-eduvis-1013.html
 create mode 100644 program/paper_w-eduvis-1015.html
 create mode 100644 program/paper_w-eduvis-1017.html
 create mode 100644 program/paper_w-eduvis-1018.html
 create mode 100644 program/paper_w-eduvis-1019.html
 create mode 100644 program/paper_w-eduvis-1020.html
 create mode 100644 program/paper_w-eduvis-1026.html
 create mode 100644 program/paper_w-eduvis-1027.html
 create mode 100644 program/paper_w-eduvis-1028.html
 create mode 100644 program/paper_w-eduvis-1029.html
 create mode 100644 program/paper_w-eduvis-1030.html
 create mode 100644 program/paper_w-eduvis-1031.html
 create mode 100644 program/paper_w-energyvis-1762.html
 create mode 100644 program/paper_w-energyvis-2646.html
 create mode 100644 program/paper_w-energyvis-2743.html
 create mode 100644 program/paper_w-energyvis-2845.html
 create mode 100644 program/paper_w-energyvis-3496.html
 create mode 100644 program/paper_w-energyvis-4332.html
 create mode 100644 program/paper_w-energyvis-6102.html
 create mode 100644 program/paper_w-energyvis-9750.html
 create mode 100644 program/paper_w-future-1007.html
 create mode 100644 program/paper_w-future-1008.html
 create mode 100644 program/paper_w-future-1011.html
 create mode 100644 program/paper_w-future-1012.html
 create mode 100644 program/paper_w-future-1013.html
 create mode 100644 program/paper_w-nlviz-1004.html
 create mode 100644 program/paper_w-nlviz-1007.html
 create mode 100644 program/paper_w-nlviz-1008.html
 create mode 100644 program/paper_w-nlviz-1009.html
 create mode 100644 program/paper_w-nlviz-1010.html
 create mode 100644 program/paper_w-nlviz-1011.html
 create mode 100644 program/paper_w-nlviz-1016.html
 create mode 100644 program/paper_w-nlviz-1019.html
 create mode 100644 program/paper_w-nlviz-1020.html
 create mode 100644 program/paper_w-nlviz-1021.html
 create mode 100644 program/paper_w-nlviz-1022.html
 create mode 100644 program/paper_w-storygenai-5237.html
 create mode 100644 program/paper_w-storygenai-6168.html
 create mode 100644 program/paper_w-storygenai-7043.html
 create mode 100644 program/paper_w-storygenai-7072.html
 create mode 100644 program/paper_w-topoinvis-1027.html
 create mode 100644 program/paper_w-topoinvis-1031.html
 create mode 100644 program/paper_w-topoinvis-1033.html
 create mode 100644 program/paper_w-topoinvis-1034.html
 create mode 100644 program/paper_w-topoinvis-1038.html
 create mode 100644 program/paper_w-topoinvis-1041.html
 create mode 100644 program/paper_w-uncertainty-1007.html
 create mode 100644 program/paper_w-uncertainty-1009.html
 create mode 100644 program/paper_w-uncertainty-1010.html
 create mode 100644 program/paper_w-uncertainty-1011.html
 create mode 100644 program/paper_w-uncertainty-1012.html
 create mode 100644 program/paper_w-uncertainty-1013.html
 create mode 100644 program/paper_w-uncertainty-1014.html
 create mode 100644 program/paper_w-uncertainty-1015.html
 create mode 100644 program/paper_w-uncertainty-1016.html
 create mode 100644 program/paper_w-uncertainty-1017.html
 create mode 100644 program/paper_w-uncertainty-1018.html
 create mode 100644 program/paper_w-uncertainty-1019.html
 create mode 100644 program/paper_w-vis4climate-1000.html
 create mode 100644 program/paper_w-vis4climate-1008.html
 create mode 100644 program/paper_w-vis4climate-1011.html
 create mode 100644 program/paper_w-vis4climate-1018.html
 create mode 100644 program/paper_w-vis4climate-1023.html
 create mode 100644 program/paper_w-vis4climate-1024.html
 create mode 100644 program/session_a-ldav0.html
 create mode 100644 program/session_s-vds0.html
 create mode 100644 program/session_w-beliv0.html
 create mode 100644 program/session_w-eduvis0.html
 create mode 100644 program/session_w-energyvis0.html
 create mode 100644 program/session_w-future0.html
 create mode 100644 program/session_w-nlviz0.html
 create mode 100644 program/session_w-storygenai0.html
 create mode 100644 program/session_w-topoinvis0.html
 create mode 100644 program/session_w-uncertainty0.html
 create mode 100644 program/session_w-vis4climate0.html

diff --git a/program/event_a-ldav.html b/program/event_a-ldav.html
index 986d5e16c..6858c9e76 100644
--- a/program/event_a-ldav.html
+++ b/program/event_a-ldav.html
@@ -1,4 +1,4 @@
- IEEE VIS 2024 Content: LDAV: 13th IEEE Symposium on Large Data Analysis and Visualization

LDAV: 13th IEEE Symposium on Large Data Analysis and Visualization

Add all of this event's sessions to your calendar.

associated

LDAV

6 presentations in this session. See more »

IEEE VIS 2024 Content: VDS: Visualization in Data Science Symposium

VDS: Visualization in Data Science Symposium

Add all of this event's sessions to your calendar.

associated

VDS

6 presentations in this session. See more »

IEEE VIS 2024 Content: BELIV: evaluation and BEyond - methodoLogIcal approaches for Visualization

BELIV: evaluation and BEyond - methodoLogIcal approaches for Visualization

Add all of this event's sessions to your calendar.

workshop

BELIV

17 presentations in this session. See more »

IEEE VIS 2024 Content: EduVis: Workshop on Visualization Education, Literacy, and Activities

EduVis: Workshop on Visualization Education, Literacy, and Activities

Add all of this event's sessions to your calendar.

workshop

EduVis

15 presentations in this session. See more »

IEEE VIS 2024 Content: EnergyVis 2024: 4th Workshop on Energy Data Visualization

EnergyVis 2024: 4th Workshop on Energy Data Visualization

Add all of this event's sessions to your calendar.

workshop

EnergyVis

8 presentations in this session. See more »

IEEE VIS 2024 Content: VISions of the Future: Workshop on Sustainable Practices within Visualization and Physicalisation

VISions of the Future: Workshop on Sustainable Practices within Visualization and Physicalisation

Add all of this event's sessions to your calendar.

workshop

VISions of the Future

5 presentations in this session. See more »

IEEE VIS 2024 Content: NLVIZ Workshop: Exploring Research Opportunities for Natural Language, Text, and Data Visualization

NLVIZ Workshop: Exploring Research Opportunities for Natural Language, Text, and Data Visualization

Add all of this event's sessions to your calendar.

workshop

NLVIZ

11 presentations in this session. See more »

IEEE VIS 2024 Content: Workshop on Data Storytelling in an Era of Generative AI

Workshop on Data Storytelling in an Era of Generative AI

Add all of this event's sessions to your calendar.

workshop

Data Story GenAI

4 presentations in this session. See more »

IEEE VIS 2024 Content: TopoInVis: Workshop on Topological Data Analysis and Visualization

TopoInVis: Workshop on Topological Data Analysis and Visualization

Add all of this event's sessions to your calendar.

workshop

TopoInVis

6 presentations in this session. See more »

IEEE VIS 2024 Content: Uncertainty Visualization: Applications, Techniques, Software, and Decision Frameworks

Uncertainty Visualization: Applications, Techniques, Software, and Decision Frameworks

Add all of this event's sessions to your calendar.

workshop

Uncertainty Visualization

12 presentations in this session. See more »

IEEE VIS 2024 Content: Visualization for Climate Action and Sustainability

Visualization for Climate Action and Sustainability

Add all of this event's sessions to your calendar.

workshop

Vis4Climate

6 presentations in this session. See more »

IEEE VIS 2024 Content: Efficient Analysis and Visualization of High-Resolution Computed Tomography Data for the Exploration of Enclosed Cuneiform Tablets

Efficient Analysis and Visualization of High-Resolution Computed Tomography Data for the Exploration of Enclosed Cuneiform Tablets

Stephan Olbrich - Universität Hamburg, Hamburg, Germany

Andreas Beckert - Universität Hamburg, Hamburg, Germany

Cécile Michel - Centre National de la Recherche Scientifique (CNRS), Nanterre, France

Christian Schroer - Deutsches Elektronen-Synchrotron (DESY), Hamburg, Germany. Universität Hamburg, Hamburg, Germany

Samaneh Ehteram - Deutsches Elektronen-Synchrotron (DESY), Hamburg, Germany. Universität Hamburg, Hamburg, Germany

Andreas Schropp - Deutsches Elektronen-Synchrotron (DESY), Hamburg, Germany

Philipp Paetzold - Deutsches Elektronen-Synchrotron (DESY), Hamburg, Germany

Room: To Be Announced

Abstract

Cuneiform is the earliest known system of writing, first developed for the Sumerian language of southern Mesopotamia in the second half of the 4th millennium BC. Cuneiform signs are obtained by impressing a stylus on fresh clay tablets. For certain purposes, e.g., authentication by seal imprint, some cuneiform tablets were enclosed in clay envelopes, which cannot be opened without destroying them. The aim of our interdisciplinary project is the non-invasive study of such clay tablets. A portable X-ray micro-CT scanner is developed to acquire density data of such artifacts on a high-resolution, regular 3D grid at collection sites. The resulting volume data is processed through feature-preserving denoising, extraction of high-accuracy surfaces using a manifold dual marching cubes algorithm, and extraction of local features by enhanced curvature rendering and ambient occlusion. For the non-invasive study of cuneiform inscriptions, the tablet is virtually separated from its envelope by curvature-based segmentation. The computational- and data-intensive algorithms are optimized for near-real-time offline usage with limited resources at collection sites. To visualize the complexity-reduced and octree-based compressed representation of surfaces, we develop and implement an interactive application. To facilitate the analysis of such clay tablets, we implement shape-based feature extraction algorithms to enhance cuneiform recognition. Our workflow supports innovative 3D display and interaction techniques such as autostereoscopic displays and gesture control.
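For readers who want to experiment with the surface-extraction step, a minimal sketch using scikit-image's classic marching cubes follows; the synthetic volume, iso-level, and Gaussian denoising are placeholders, and the manifold dual marching cubes variant used by the authors is not available in off-the-shelf libraries.

import numpy as np
from scipy.ndimage import gaussian_filter
from skimage import measure

# Synthetic stand-in for a denoised micro-CT density volume; real input
# comes from the portable X-ray micro-CT scanner described above.
rng = np.random.default_rng(0)
volume = gaussian_filter(rng.normal(size=(64, 64, 64)), sigma=3)

# Extract an iso-surface at a density threshold separating clay from air;
# verts/faces can then feed curvature-based segmentation and rendering.
verts, faces, normals, values = measure.marching_cubes(volume, level=0.0)
print(verts.shape, faces.shape)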

\ No newline at end of file
diff --git a/program/paper_a-ldav-1003.html b/program/paper_a-ldav-1003.html
new file mode 100644
index 000000000..9d09d38a4
--- /dev/null
+++ b/program/paper_a-ldav-1003.html
@@ -0,0 +1,127 @@
+ IEEE VIS 2024 Content: Out-of-Core Dimensionality Reduction for Large Data via Out-of-Sample Extensions

Out-of-Core Dimensionality Reduction for Large Data via Out-of-Sample Extensions

Luca Marcel Reichmann - Universität Stuttgart, Stuttgart, Germany

David Hägele - University of Stuttgart, Stuttgart, Germany

Daniel Weiskopf - University of Stuttgart, Stuttgart, Germany

Room: To Be Announced

Abstract

Dimensionality reduction (DR) is a well-established approach for the visualization of high-dimensional data sets. While DR methods are often applied to typical DR benchmark data sets in the literature, they might suffer from high runtime complexity and memory requirements, making them unsuitable for large data visualization, especially in environments outside of high-performance computing. To perform DR on large data sets, we propose the use of out-of-sample extensions. Such extensions allow inserting new data into existing projections, which we leverage to iteratively project data into a reference projection that consists only of a small, manageable subset. This process makes it possible to perform DR out-of-core on large data, which would otherwise not be possible due to memory and runtime limitations. For metric multidimensional scaling (MDS), we contribute an implementation with out-of-sample projection capability, since typical software libraries do not support it. We provide an evaluation of the projection quality of five common DR algorithms (MDS, PCA, t-SNE, UMAP, and autoencoders) using quality metrics from the literature and analyze the trade-off between the size of the reference set and projection quality. The runtime behavior of the algorithms is also quantified with respect to reference set size, out-of-sample batch size, and dimensionality of the data sets. Furthermore, we compare the out-of-sample approach to other recently introduced DR methods, such as PaCMAP and TriMAP, which claim to handle larger data sets than traditional approaches. To showcase the usefulness of DR on this large scale, we contribute a use case where we analyze ensembles of streamlines amounting to one billion projected instances.
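A minimal sketch of the core idea, metric MDS with an out-of-sample extension, is shown below on small synthetic data; the authors' implementation additionally handles batching and out-of-core storage, which the sketch omits.

import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import cdist
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
reference = rng.normal(size=(200, 10))  # small, manageable reference set
ref_2d = MDS(n_components=2, random_state=0).fit_transform(reference)

def project_out_of_sample(x_new):
    # Place one new point by minimizing its stress against the fixed
    # reference projection; reference coordinates are never moved.
    d_high = cdist(x_new[None, :], reference)[0]
    def stress(p):
        return np.sum((d_high - np.linalg.norm(ref_2d - p, axis=1)) ** 2)
    return minimize(stress, x0=ref_2d.mean(axis=0)).x

new_2d = project_out_of_sample(rng.normal(size=10))

Iterating this over batches lets arbitrarily large data be projected while only the reference set stays resident in memory.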

\ No newline at end of file
diff --git a/program/paper_a-ldav-1006.html b/program/paper_a-ldav-1006.html
new file mode 100644
index 000000000..f218c03e7
--- /dev/null
+++ b/program/paper_a-ldav-1006.html
@@ -0,0 +1,127 @@
+ IEEE VIS 2024 Content: Web-based Visualization and Analytics of Petascale data: Equity as a Tide that Lifts All Boats

Web-based Visualization and Analytics of Petascale data: Equity as a Tide that Lifts All Boats

Aashish Panta - University of Utah, Salt Lake City, United States

Xuan Huang - Scientific Computing and Imaging Institute, Salt Lake City, United States

Nina McCurdy - NASA Ames Research Center, Mountain View, United States

David Ellsworth - NASA, Mountain View, United States

Amy Gooch - University of Utah, Salt Lake City, United States

Giorgio Scorzelli - University of Utah, Salt Lake City, United States

Hector Torres - NASA, Pasadena, United States

Patrice Klein - Caltech, Pasadena, United States

Gustavo Ovando-Montejo - Utah State University Blanding, Blanding, United States

Valerio Pascucci - University of Utah, Salt Lake City, United States

Room: To Be Announced

Abstract

Scientists generate petabytes of data daily to help uncover environmental trends or behaviors that are hard to predict. For example, understanding climate simulations based on the long-term average of temperature, precipitation, and other environmental variables is essential to predicting and establishing root causes of future undesirable scenarios and assessing possible mitigation strategies. Unfortunately, bottlenecks in petascale workflows restrict scientists' ability to analyze and visualize the necessary information due to requirements for extensive computational resources, obstacles in data accessibility, and inefficient analysis algorithms. This paper presents an approach to managing, visualizing, and analyzing petabytes of data within a browser on equipment ranging from the top NASA supercomputer to commodity hardware like a laptop. Our approach is based on a novel data fabric abstraction layer that allows querying scientific information in a user-friendly form while hiding the complexities of dealing with file systems or cloud services. We also optimize network utilization while streaming from petascale repositories through state-of-the-art progressive compression algorithms. Based on this abstraction, we provide customizable dashboards that can be accessed from any device with an internet connection, offering straightforward access to vast amounts of data typically not available to those without uniquely expensive hardware resources. Our dashboards provide and improve the ability to access and, more importantly, use massive data for a wide range of users, from top scientists with access to leadership-class computing environments to undergraduate students from disadvantaged backgrounds at minority-serving institutions. We focus on NASA's use of petascale climate datasets as an example of particular societal impact and, therefore, a case where achieving equity in science participation is critical. In particular, we validate our approach by improving the ability of climate scientists to explore their data even on the top NASA supercomputer, introducing the ability to study their data in a fully interactive environment instead of being limited to pre-choreographed videos that can each take days to generate. We also successfully introduced the same dashboards and simplified training material in an undergraduate class on Geospatial Analysis at a minority-serving campus (Utah State University Blanding), where 69% of students are Native American and 86% are low-income. The same dashboards are also released in simplified form to the general public, providing an unparalleled democratization of access to and use of climate data that can be extended to most scientific domains.
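The paper's data fabric is its own system and cannot be copied here; as a rough analogy for the progressive-access idea, chunked cloud formats let a browser-backed service serve coarse summaries first. A sketch with xarray, assuming a hypothetical Zarr store URL and variable name:

import xarray as xr

# Hypothetical chunked climate store; the URL and variable name are
# placeholders, not the NASA repositories used in the paper.
ds = xr.open_zarr("s3://example-bucket/climate.zarr", consolidated=True)

# Progressive refinement: serve a coarse preview first by aggregating
# 8x8 blocks of one field, then stream finer levels on demand.
preview = ds["temperature"].coarsen(lat=8, lon=8, boundary="trim").mean()
print(preview.isel(time=0).compute())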

\ No newline at end of file
diff --git a/program/paper_a-ldav-1011.html b/program/paper_a-ldav-1011.html
new file mode 100644
index 000000000..75be8fcdb
--- /dev/null
+++ b/program/paper_a-ldav-1011.html
@@ -0,0 +1,127 @@
+ IEEE VIS 2024 Content: Distributed Path Compression for Piecewise Linear Morse-Smale Segmentations and Connected Components

Distributed Path Compression for Piecewise Linear Morse-Smale Segmentations and Connected Components

Michael Will - RPTU Kaiserslautern-Landau, Kaiserslautern, Germany

Jonas Lukasczyk - RPTU Kaiserslautern-Landau, Kaiserslautern, Germany

Julien Tierny - CNRS, Paris, France. Sorbonne Université, Paris, France

Christoph Garth - RPTU Kaiserslautern-Landau, Kaiserslautern, Germany

Room: To Be Announced

Abstract

This paper describes the adaptation of a well-scaling parallel algorithm for computing Morse-Smale segmentations based on path compression to a distributed computational setting. Additionally, we extend the algorithm to efficiently compute connected components in distributed structured and unstructured grids, based either on the connectivity of the underlying mesh or a feature mask. Our implementation is seamlessly integrated with the distributed extension of the Topology ToolKit (TTK), ensuring robust performance and scalability. To demonstrate the practicality and efficiency of our algorithms, we conducted a series of scaling experiments on large-scale datasets, with sizes of up to 4096^3 vertices on up to 64 nodes and 768 cores.
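A serial sketch of the path-compression core on a feature mask is shown below in plain NumPy; the paper's contribution is running this across distributed memory within TTK, which the sketch does not attempt.

import numpy as np

def connected_components(mask):
    # Union-find with path compression over grid edges; cells where the
    # boolean feature mask is False stay unlabeled (-1).
    parent = np.arange(mask.size)

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # compress the path as we walk
            i = parent[i]
        return i

    idx = np.arange(mask.size).reshape(mask.shape)
    for axis in range(mask.ndim):
        lo = [slice(None)] * mask.ndim
        hi = [slice(None)] * mask.ndim
        lo[axis], hi[axis] = slice(None, -1), slice(1, None)
        both = mask[tuple(lo)] & mask[tuple(hi)]  # edge joins two masked cells
        for a, b in zip(idx[tuple(lo)][both], idx[tuple(hi)][both]):
            ra, rb = find(a), find(b)
            if ra != rb:
                parent[rb] = ra

    flat = mask.ravel()
    return np.array([find(i) if flat[i] else -1
                     for i in range(flat.size)]).reshape(mask.shape)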

\ No newline at end of file
diff --git a/program/paper_a-ldav-1016.html b/program/paper_a-ldav-1016.html
new file mode 100644
index 000000000..1de141034
--- /dev/null
+++ b/program/paper_a-ldav-1016.html
@@ -0,0 +1,127 @@
+ IEEE VIS 2024 Content: Standardized Data-Parallel Rendering Using ANARI

Standardized Data-Parallel Rendering Using ANARI

Ingo Wald - NVIDIA, Salt Lake City, United States

Stefan Zellmann - University of Cologne, Cologne, Germany

Jefferson Amstutz - NVIDIA, Austin, United States

Qi Wu - University of California, Davis, Davis, United States

Kevin Shawn Griffin - NVIDIA, Santa Clara, United States

Milan Jaroš - VSB - Technical University of Ostrava, Ostrava, Czech Republic

Stefan Wesner - University of Cologne, Cologne, Germany

Room: To Be Announced

Abstract

We propose and discuss a paradigm that allows for expressing data-parallel rendering with the classically non-parallel ANARI API. We propose this as a new standard for data-parallel rendering, describe two different implementations of this paradigm, and use multiple sample integrations into existing applications to show how easy it is to adopt, and what can be gained from doing so.
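ANARI itself is a C API, so no Python binding is implied by the paper; purely to illustrate the sort-last structure behind data-parallel rendering, here is an mpi4py sketch in which each rank renders its local brick and rank 0 composites by depth.

import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Stub: each rank renders only its own sub-volume into a partial image.
local_rgba = np.zeros((256, 256, 4), np.float32)
local_depth = np.full((256, 256), np.inf, np.float32)
# ... a per-rank renderer would fill local_rgba / local_depth here ...

# Sort-last compositing: keep the nearest fragment per pixel (a stand-in
# for proper front-to-back alpha blending in depth order).
all_rgba = comm.gather(local_rgba, root=0)
all_depth = comm.gather(local_depth, root=0)
if rank == 0:
    nearest = np.stack(all_depth).argmin(axis=0)
    image = np.take_along_axis(np.stack(all_rgba),
                               nearest[None, :, :, None], axis=0)[0]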

\ No newline at end of file
diff --git a/program/paper_a-ldav-1018.html b/program/paper_a-ldav-1018.html
new file mode 100644
index 000000000..065ad3f81
--- /dev/null
+++ b/program/paper_a-ldav-1018.html
@@ -0,0 +1,127 @@
+ IEEE VIS 2024 Content: Adaptive Multi-Resolution Encoding for Interactive Large-Scale Volume Visualization through Functional Approximation

Adaptive Multi-Resolution Encoding for Interactive Large-Scale Volume Visualization through Functional Approximation

Jianxin Sun - University of Nebraska-Lincoln, Lincoln, United States

David Lenz - Argonne National Laboratory, Lemont, United States

Hongfeng Yu - University of Nebraska-Lincoln, Lincoln, United States

Tom Peterka - Argonne National Laboratory, Lemont, United States

Room: To Be Announced

Abstract

Functional approximation as a high-order continuous representation provides a more accurate value and gradient query compared to the traditional discrete volume representation. Volume visualization directly rendered from functional approximation generates high-quality rendering results without high-order artifacts caused by trilinear interpolations. However, querying an encoded functional approximation is computationally expensive, especially when the input dataset is large, making functional approximation impractical for interactive visualization. In this paper, we propose a novel functional approximation multi-resolution representation, Adaptive-FAM, which is lightweight and fast to query. We also design a GPU-accelerated out-of-core multi-resolution volume visualization framework that directly utilizes the Adaptive-FAM representation to generate high-quality rendering with interactive responsiveness. Our method can not only dramatically decrease the caching time, one of the main contributors to input latency, but also effectively improve the cache hit rate through prefetching. Our approach significantly outperforms the traditional functional approximation method in terms of input latency while maintaining comparable rendering quality.
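Adaptive-FAM itself is not available in SciPy; as a small illustration of why a higher-order continuous representation changes query results, the sketch below compares trilinear and cubic queries on a toy grid (cubic support requires SciPy 1.10 or newer).

import numpy as np
from scipy.interpolate import RegularGridInterpolator

axes = [np.linspace(0, 1, 32)] * 3
x, y, z = np.meshgrid(*axes, indexing="ij")
volume = np.sin(4 * x) * np.cos(4 * y) * z  # toy scalar field

trilinear = RegularGridInterpolator(tuple(axes), volume, method="linear")
cubic = RegularGridInterpolator(tuple(axes), volume, method="cubic")

query = np.random.default_rng(0).random((1000, 3))
print(np.abs(trilinear(query) - cubic(query)).max())  # interpolation gap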

\ No newline at end of file
diff --git a/program/paper_s-vds-1000.html b/program/paper_s-vds-1000.html
new file mode 100644
index 000000000..1db1eab96
--- /dev/null
+++ b/program/paper_s-vds-1000.html
@@ -0,0 +1,127 @@
+ IEEE VIS 2024 Content: Interactive Public Transport Infrastructure Analysis through Mobility Profiles: Making the Mobility Transition Transparent

Interactive Public Transport Infrastructure Analysis through Mobility Profiles: Making the Mobility Transition Transparent

Yannick Metz - University of Konstanz, Konstanz, Germany

Dennis Ackermann - University of Konstanz, Konstanz, Germany

Daniel Keim - University of Konstanz, Konstanz, Germany

Maximilian T. Fischer - University of Konstanz, Konstanz, Germany

Room: To Be Announced

Abstract

Efficient public transport systems are crucial for sustainable urban development as cities face increasing mobility demands. Yet, many public transport networks struggle to meet diverse user needs due to historical development, urban constraints, and financial limitations. Traditionally, planning of transport network structure is often based on limited surveys, expert opinions, or partial usage statistics, providing an incomplete basis for decision-making. We introduce a data-driven approach to public transport planning and optimization, calculating detailed accessibility measures at the individual housing level. Our visual analytics workflow combines population-group-based simulations with dynamic infrastructure analysis, utilizing a scenario-based model to simulate the daily travel patterns of varied demographic groups, including schoolchildren, students, workers, and pensioners. These population groups, each with unique mobility requirements and routines, interact with the transport system under different scenarios, traveling to and from Points of Interest (POI), assessed through travel time calculations. Results are visualized through heatmaps, density maps, and network overlays, as well as detailed statistics. Our system allows us to analyze both the underlying data and the simulation results at multiple levels of granularity, delivering both broad insights and granular details. Case studies with the city of Konstanz, Germany, reveal key areas where public transport does not meet specific needs, confirmed through a formative user study. Because changing legacy networks is costly, our analysis facilitates the identification of strategic enhancements, such as optimized schedules, rerouting, and a few targeted stop relocations, highlighting consequential variations in accessibility and pinpointing critical service gaps. Our research advances urban transport analytics by providing policymakers and citizens with a system that delivers both broad insights and granular detail into public transport services for a data-driven quality assessment at housing-level detail.
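A toy sketch of the travel-time core follows: edge weights in minutes on a hypothetical network, with accessibility as the share of POIs reachable within a cutoff. The study's real inputs are the Konstanz transit and street networks and the demographic routines described above.

import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ("home_1", "stop_a", 5), ("stop_a", "stop_b", 12),
    ("stop_b", "school", 4), ("home_1", "school", 45),  # walking fallback
], weight="time")

def accessibility(home, pois, cutoff=30):
    # Share of POIs reachable within `cutoff` minutes; one cell of the
    # housing-level heatmaps described above.
    times = nx.single_source_dijkstra_path_length(
        G, home, cutoff=cutoff, weight="time")
    return sum(p in times for p in pois) / len(pois)

print(accessibility("home_1", ["school"]))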

\ No newline at end of file
diff --git a/program/paper_s-vds-1002.html b/program/paper_s-vds-1002.html
new file mode 100644
index 000000000..f337df852
--- /dev/null
+++ b/program/paper_s-vds-1002.html
@@ -0,0 +1,127 @@
+ IEEE VIS 2024 Content: Visualization and Automation in Data Science: Exploring the Paradox of Humans-in-the-Loop

Visualization and Automation in Data Science: Exploring the Paradox of Humans-in-the-Loop

Jen Rogers - Tufts University, Boston, United States

Mehdi Chakhchoukh - Université Paris-Saclay, CNRS, INRIA, Orsay, France

Marie Anastacio - Leiden Universiteit, Leiden, Netherlands

Rebecca Faust - Tulane University, New Orleans, United States

Cagatay Turkay - University of Warwick, Coventry, United Kingdom

Lars Kotthoff - University of Wyoming, Laramie, United States

Steffen Koch - University of Stuttgart, Stuttgart, Germany

Andreas Kerren - Linköping University, Norrköping, Sweden

Jürgen Bernard - University of Zurich, Zurich, Switzerland

Room: To Be Announced

Abstract

This position paper explores the interplay between automation and human involvement in data science. It synthesizes perspectives from Automated Data Science (AutoDS) and Interactive Data Visualization (VIS), which traditionally represent opposing ends of the human-machine spectrum. While AutoDS aims to enhance efficiency by reducing human tasks, VIS emphasizes the importance of nuanced understanding, innovation, and context provided by human involvement. This paper examines these dichotomies through an online survey and advocates for a balanced approach that harmonizes the efficiency of automation with the irreplaceable insights of human expertise. Ultimately, we address the essential question of not just what we can automate, but what we should automate, seeking strategies that prioritize technological advancement alongside the fundamental need for human oversight.

\ No newline at end of file
diff --git a/program/paper_s-vds-1007.html b/program/paper_s-vds-1007.html
new file mode 100644
index 000000000..32a013ba2
--- /dev/null
+++ b/program/paper_s-vds-1007.html
@@ -0,0 +1,127 @@
+ IEEE VIS 2024 Content: The Categorical Data Map: A Multidimensional Scaling-Based Approach

The Categorical Data Map: A Multidimensional Scaling-Based Approach

Frederik L. Dennig - University of Konstanz, Konstanz, Germany

Lucas Joos - University of Konstanz, Konstanz, Germany

Patrick Paetzold - University of Konstanz, Konstanz, Germany

Daniela Blumberg - University of Konstanz, Konstanz, Germany

Oliver Deussen - University of Konstanz, Konstanz, Germany

Daniel Keim - University of Konstanz, Konstanz, Germany

Maximilian T. Fischer - University of Konstanz, Konstanz, Germany

Room: To Be Announced

Abstract

Categorical data does not have an intrinsic definition of distance or order, and therefore, established visualization techniques for categorical data only allow for a set-based or frequency-based analysis, e.g., through Euler diagrams or Parallel Sets, and do not support a similarity-based analysis. We present a novel dimensionality reduction-based visualization for categorical data, which is based on defining the distance of two data items as the number of varying attributes. Our technique enables users to pre-attentively detect groups of similar data items and observe the properties of the projection, such as attributes strongly influencing the embedding. Our prototype visually encodes data properties in an enhanced scatterplot-like visualization, visualizing attributes in the background to show the distribution of categories. In addition, we propose two graph-based measures to quantify the plot's visual quality, which rank attributes according to their contribution to cluster cohesion. To demonstrate the capabilities of our similarity-based projection method, we compare it to Euler diagrams and Parallel Sets regarding visual scalability and evaluate it quantitatively on seven real-world datasets using a range of common quality measures. Further, we validate the benefits of our approach through an expert study with five data scientists analyzing the Titanic and Mushroom datasets with up to 23 attributes and 8124 category combinations. Our results indicate that our Categorical Data Map offers an effective analysis method for large datasets with a high number of category combinations.
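The distance definition makes the method easy to prototype; a sketch with SciPy and scikit-learn follows, assuming integer-coded categories. It is not the authors' full system, which adds the background encoding and the two quality measures.

import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

# Rows are data items, columns are categorical attributes (integer codes).
data = np.array([[0, 1, 2], [0, 1, 0], [1, 0, 2], [1, 0, 0]])

# Distance of two items = number of differing attributes, i.e. an
# unnormalized Hamming distance.
dist = squareform(pdist(data, metric="hamming")) * data.shape[1]

coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dist)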

\ No newline at end of file
diff --git a/program/paper_s-vds-1013.html b/program/paper_s-vds-1013.html
new file mode 100644
index 000000000..93e31412c
--- /dev/null
+++ b/program/paper_s-vds-1013.html
@@ -0,0 +1,127 @@
+ IEEE VIS 2024 Content: Towards a Visual Perception-Based Analysis of Clustering Quality Metrics

Towards a Visual Perception-Based Analysis of Clustering Quality Metrics

Graziano Blasilli - Sapienza University of Rome, Rome, Italy

Daniel Kerrigan - Northeastern University, Boston, United States

Enrico Bertini - Northeastern University, Boston, United States

Giuseppe Santucci - Sapienza University of Rome, Rome, Italy

Room: To Be Announced

Abstract

Clustering is an essential technique across various domains, such as data science, machine learning, and eXplainable Artificial Intelligence. Information visualization and visual analytics techniques have been proven to effectively support human involvement in the visual exploration of clustered data to enhance the understanding and refinement of cluster assignments. This paper presents a deep and exhaustive evaluation of the perceptual aspects of clustering quality metrics, focusing on the Davies-Bouldin Index, Dunn Index, Calinski-Harabasz Index, and Silhouette Score. Our research is centered around two main objectives: a) assessing the human perception of common cluster validity indices (CVIs) in 2D scatterplots and b) exploring the potential of Large Language Models (LLMs), in particular GPT-4o, to emulate the assessed human perception. By discussing the obtained results and highlighting limitations and areas for further exploration, this paper aims to propose a foundation for future research activities.
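The four metrics are straightforward to reproduce for experimentation; a sketch on synthetic blobs follows, with the Dunn Index computed by hand since scikit-learn does not ship it.

import numpy as np
from scipy.spatial.distance import cdist
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import (calinski_harabasz_score, davies_bouldin_score,
                             silhouette_score)

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

def dunn_index(X, labels):
    # Smallest inter-cluster distance over largest cluster diameter.
    groups = [X[labels == c] for c in np.unique(labels)]
    inter = min(cdist(a, b).min() for i, a in enumerate(groups)
                for b in groups[i + 1:])
    diam = max(cdist(g, g).max() for g in groups)
    return inter / diam

print(davies_bouldin_score(X, labels), dunn_index(X, labels),
      calinski_harabasz_score(X, labels), silhouette_score(X, labels))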

\ No newline at end of file
diff --git a/program/paper_s-vds-1021.html b/program/paper_s-vds-1021.html
new file mode 100644
index 000000000..519bb2bda
--- /dev/null
+++ b/program/paper_s-vds-1021.html
@@ -0,0 +1,127 @@
+ IEEE VIS 2024 Content: Interactive Counterfactual Exploration of Algorithmic Harms in Recommender Systems

Interactive Counterfactual Exploration of Algorithmic Harms in Recommender Systems

Yongsu Ahn - University of Pittsburgh, Pittsburgh, United States

Quinn K Wolter - School of Computing and Information, University of Pittsburgh, Pittsburgh, United States

Jonilyn Dick - Quest Diagnostics, Pittsburgh, United States

Janet Dick - Quest Diagnostics, Pittsburgh, United States

Yu-Ru Lin - University of Pittsburgh, Pittsburgh, United States

Room: To Be Announced

Abstract

Recommender systems have become integral to digital experiences, shaping user interactions and preferences across various platforms. Despite their widespread use, these systems often suffer from algorithmic biases that can lead to unfair and unsatisfactory user experiences. This study introduces an interactive tool designed to help users comprehend and explore the impacts of algorithmic harms in recommender systems. By leveraging visualizations, counterfactual explanations, and interactive modules, the tool allows users to investigate how biases such as miscalibration, stereotypes, and filter bubbles affect their recommendations. Informed by in-depth user interviews, both general users and researchers can benefit from increased transparency and personalized impact assessments, ultimately fostering a better understanding of algorithmic biases and contributing to more equitable recommendation outcomes. This work provides valuable insights for future research and practical applications in mitigating bias and enhancing fairness in machine learning algorithms.
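One of the harms named above, miscalibration, has a compact quantitative form; a sketch with toy genre distributions follows (illustrative numbers, not the study's data or tool code).

import numpy as np
from scipy.stats import entropy

# One user's genre distribution in their history vs. in what the
# recommender served (toy values: drama, comedy, sci-fi, documentary).
history = np.array([0.50, 0.30, 0.15, 0.05])
recommended = np.array([0.20, 0.55, 0.20, 0.05])

# Miscalibration as the KL divergence between the two distributions; a
# counterfactual probe recomputes it after removing one interaction
# from the history and re-running the recommender.
print(entropy(history, recommended))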

\ No newline at end of file
diff --git a/program/paper_s-vds-1029.html b/program/paper_s-vds-1029.html
new file mode 100644
index 000000000..fd2d47d05
--- /dev/null
+++ b/program/paper_s-vds-1029.html
@@ -0,0 +1,127 @@
+ IEEE VIS 2024 Content: Seeing the Shift: Keep an Eye on Semantic Changes in Times of LLMs

Seeing the Shift: Keep an Eye on Semantic Changes in Times of LLMs

Raphael Buchmüller - University of Konstanz, Konstanz, Germany

Friederike Körte - University of Konstanz, Konstanz, Germany

Daniel Keim - University of Konstanz, Konstanz, Germany

Room: To Be Announced

Abstract

This position paper discusses the profound impact of Large Language Models (LLMs) on semantic change, emphasizing the need for comprehensive monitoring and visualization techniques. Building on established concepts from linguistics, we examine the interdependency between mental and language models, discussing how LLMs influence and are influenced by human cognition and societal context. We introduce three primary theories to conceptualize such influences: Recontextualization, Standardization, and Semantic Dementia, illustrating how LLMs drive, standardize, and potentially degrade language semantics. Our subsequent review categorizes methods for visualizing semantic change into frequency-based, embedding-based, and context-based techniques and provides a first assessment of their effectiveness in capturing linguistic evolution: embedding-based methods are highlighted as crucial for detailed semantic analysis, reflecting both broad trends and specific linguistic changes. We underscore the need for novel visual, interactive tools to monitor and explain semantic changes induced by LLMs, ensuring the preservation of linguistic diversity and mitigating linguistic biases. This work provides essential insights for future research on semantic change visualization and the dynamic nature of language evolution in the times of LLMs.
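As a minimal flavor of the embedding-based family reviewed here, the sketch below compares a word's vector across two hypothetical epoch-specific embedding tables via cosine distance; real studies align the spaces first (e.g., with an orthogonal Procrustes step), which the sketch assumes away.

import numpy as np

rng = np.random.default_rng(0)
vocab = ["cell", "mouse", "web"]
# Hypothetical embeddings trained on an earlier and a later corpus,
# assumed to live in aligned spaces for this toy example.
emb_2015 = {w: rng.normal(size=50) for w in vocab}
emb_2024 = {w: rng.normal(size=50) for w in vocab}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# High cross-epoch cosine distance flags candidate semantic change.
for w in vocab:
    print(w, round(1 - cosine(emb_2015[w], emb_2024[w]), 3))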

\ No newline at end of file
diff --git a/program/paper_v-cga-9866547.html b/program/paper_v-cga-9866547.html
index 087054a8b..2ededc800 100644
--- a/program/paper_v-cga-9866547.html
+++ b/program/paper_v-cga-9866547.html
@@ -1,4 +1,4 @@
- IEEE VIS 2024 Content: DETOXER: A Visual Debugging Tool With Multiscope Explanations for Temporal Multilabel Classification

DETOXER: A Visual Debugging Tool With Multiscope Explanations for Temporal Multilabel Classification

Mahsan Nourani -

Chiradeep Roy -

Donald R. Honeycutt -

Eric D. Ragan -

Vibhav Gogate -

Room: To Be Announced

Keywords

Debugging, Analytical Models, Heating Systems, Data Models, Computational Modeling, Activity Recognition, Deep Learning, Multi Label Classification, Visualization Tool, Temporal Classification, Visual Debugging, False Positive, False Negative, Active Components, Deep Learning Models, Types Of Errors, Video Frames, Error Detection, Detection Of Types, Action Recognition, Interactive Visualization, Sequence Of Points, Design Goals, Positive Errors, Critical Outcomes, Error Patterns, Global Panel, False Negative Rate, False Positive Rate, Heatmap, Visual Approach, Truth Labels, True Positive, Confidence Score, Anomaly Detection, Interface Elements

Abstract

In many applications, developed deep-learning models need to be iteratively debugged and refined to improve the model efficiency over time. Debugging some models, such as temporal multilabel classification (TMLC), where each data point can simultaneously belong to multiple classes, can be especially challenging due to the complexity of the analysis and the number of instances that need to be reviewed. In this article, focusing on video activity recognition as an application of TMLC, we propose DETOXER, an interactive visual debugging system to support finding different error types and scopes through providing multiscope explanations.
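The error scopes have a simple data backbone; a sketch with toy per-frame multilabel outputs follows (illustrative only, not DETOXER's code).

import numpy as np

# Per-frame boolean predictions and ground truth for one video clip:
# shape (n_frames, n_labels), toy values standing in for a TMLC model.
rng = np.random.default_rng(0)
truth = rng.random((120, 5)) < 0.3
pred = rng.random((120, 5)) < 0.3

false_pos = pred & ~truth   # label predicted but absent in the frame
false_neg = ~pred & truth   # label present but missed

# Video-scope rates per label: the kind of matrix behind a global
# heatmap view; the raw arrays above give the frame-scope detail.
print(false_pos.mean(axis=0), false_neg.mean(axis=0))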


IEEE VIS 2024 Content: Unmasking Dunning-Kruger Effect in Visual Reasoning and Visual Data Analysis

Unmasking Dunning-Kruger Effect in Visual Reasoning and Visual Data Analysis

Mengyu Chen - Emory University, Atlanta, United States

Yijun Liu - Emory University, Atlanta, United States

Emily Wall - Emory University, Atlanta, United States

Room: To Be Announced

Abstract

The Dunning-Kruger Effect (DKE) is a metacognitive phenomenon where low-skilled individuals tend to overestimate their competence while high-skilled individuals tend to underestimate their competence. This effect has been observed in a number of domains, including humor, grammar, and logic. In this paper, we explore if and how DKE manifests in visual reasoning and visual data analysis tasks. Across two online user studies involving (1) a sliding puzzle game and (2) a scatterplot-based categorization task, we demonstrate that individuals are susceptible to DKE in visual tasks: those who performed best underestimated their performance, while bottom performers overestimated their performance. In addition, we contribute novel analyses that correlate susceptibility to DKE with several variables, including personality traits and user interactions. Our findings pave the way for novel modes of bias detection via interaction patterns and establish promising directions towards interventions tailored to an individual's personality traits.
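The headline pattern is easy to test on one's own data; a sketch with synthetic scores and self-estimates follows (the studies' real measures come from the puzzle and categorization tasks).

import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
score = rng.normal(60, 15, 200)                              # actual performance
estimate = 55 + 0.2 * (score - 60) + rng.normal(0, 10, 200)  # self-report

# DKE signature: bottom performers overestimate, top performers
# underestimate, so the estimate-minus-score gap falls with skill.
quartile = np.digitize(score, np.quantile(score, [0.25, 0.5, 0.75]))
for q in range(4):
    print("quartile", q + 1, (estimate - score)[quartile == q].mean().round(1))
print(spearmanr(score, estimate - score))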


IEEE VIS 2024 Content: A Large-Scale Sensitivity Analysis on Latent Embeddings and Dimensionality Reductions for Text Spatializations

A Large-Scale Sensitivity Analysis on Latent Embeddings and Dimensionality Reductions for Text Spatializations

Daniel Atzberger - University of Potsdam, Digital Engineering Faculty, Hasso Plattner Institute, Potsdam, Germany

Tim Cech - University of Potsdam, Potsdam, Germany

Willy Scheibel - Hasso Plattner Institute, Faculty of Digital Engineering, University of Potsdam, Potsdam, Germany

Jürgen Döllner - Hasso Plattner Institute, Faculty of Digital Engineering, University of Potsdam, Potsdam, Germany

Michael Behrisch - Utrecht University, Utrecht, Netherlands

Tobias Schreck - Graz University of Technology, Graz, Austria

Room: To Be Announced

Abstract

The semantic similarity between documents of a text corpus can be visualized using map-like metaphors based on two-dimensional scatterplot layouts. These layouts result from a dimensionality reduction on the document-term matrix or a representation within a latent embedding, including topic models. The resulting layout thereby depends on the input data and the hyperparameters of the dimensionality reduction and is therefore affected by changes to either; such changes to the layout require additional cognitive effort from the user. In this work, we present a sensitivity study that analyzes the stability of these layouts concerning (1) changes in the text corpora, (2) changes in the hyperparameters, and (3) randomness in the initialization. Our approach has two stages: data measurement and data analysis. First, we derived layouts for the combination of three text corpora, six text embeddings, and a grid-search-inspired hyperparameter selection of the dimensionality reductions. Afterward, we quantified the similarity of the layouts through ten metrics concerning local and global structures and class separation. Second, we analyzed the resulting 42817 tabular data points in a descriptive statistical analysis. From this, we derived guidelines for informed decisions on the layout algorithm and highlight specific hyperparameter settings. We provide our implementation and results as a Git repository at https://github.com/hpicgs/Topic-Models-and-Dimensionality-Reduction-Sensitivity-Study .
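One ingredient of such layout comparisons is reproducible in a few lines; the sketch below uses scikit-learn's trustworthiness on a stand-in dataset, whereas the study aggregates ten metrics over corpus, embedding, and hyperparameter grids.

from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE, trustworthiness

X = load_digits().data  # stand-in for a document-term or embedding matrix

# Two layouts of the same data; varying hyperparameters and seeds here
# is exactly the perturbation the sensitivity study measures.
layouts = {"pca": PCA(n_components=2).fit_transform(X),
           "tsne": TSNE(n_components=2, random_state=0).fit_transform(X)}

for name, Y in layouts.items():
    print(name, trustworthiness(X, Y, n_neighbors=10))  # local structure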

IEEE VIS 2024 Content: A Large-Scale Sensitivity Analysis on Latent Embeddings and Dimensionality Reductions for Text Spatializations

A Large-Scale Sensitivity Analysis on Latent Embeddings and Dimensionality Reductions for Text Spatializations

Daniel Atzberger - University of Potsdam, Digital Engineering Faculty, Hasso Plattner Institute, Potsdam, Germany

Tim Cech - University of Potsdam, Potsdam, Germany

Willy Scheibel - Hasso Plattner Institute, Faculty of Digital Engineering, University of Potsdam, Potsdam, Germany

Jürgen Döllner - Hasso Plattner Institute, Faculty of Digital Engineering, University of Potsdam, Potsdam, Germany

Michael Behrisch - Utrecht University, Utrecht, Netherlands

Tobias Schreck - Graz University of Technology, Graz, Austria

Room: To Be Announced

Abstract

The semantic similarity between documents of a text corpus can be visualized using map-like metaphors based on two-dimensional scatterplot layouts. These layouts result from a dimensionality reduction on the document-term matrix or on a representation within a latent embedding, including topic models. The resulting layout depends on the input data and the hyperparameters of the dimensionality reduction and is therefore affected by changes to either; such changes to the layout demand additional cognitive effort from the user. In this work, we present a sensitivity study that analyzes the stability of these layouts with respect to (1) changes in the text corpora, (2) changes in the hyperparameters, and (3) randomness in the initialization. Our approach has two stages: data measurement and data analysis. First, we derived layouts for combinations of three text corpora, six text embeddings, and a grid-search-inspired selection of the dimensionality-reduction hyperparameters. Afterward, we quantified the similarity of the layouts through ten metrics covering local and global structure and class separation. Second, we analyzed the resulting 42,817 tabular data points in a descriptive statistical analysis. From this, we derived guidelines for informed decisions on the layout algorithm and highlighted specific hyperparameter settings. We provide our implementation and results as a Git repository at https://github.com/hpicgs/Topic-Models-and-Dimensionality-Reduction-Sensitivity-Study.
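
As a concrete illustration of the measurement stage, the following minimal sketch derives layouts for a small hyperparameter grid and scores their pairwise similarity. Here scikit-learn's t-SNE, random stand-in data, and a rank correlation of pairwise distances are illustrative assumptions standing in for the several DR algorithms, real corpora, and ten metrics used in the study.

```python
# Sketch: grid over t-SNE hyperparameters and seeds, then pairwise layout similarity.
import itertools
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))          # stand-in for a document embedding

perplexities = [5, 30, 50]              # hyperparameter grid
seeds = [0, 1, 2]                       # randomness in the initialization
layouts = {
    (p, s): TSNE(n_components=2, perplexity=p, random_state=s).fit_transform(X)
    for p, s in itertools.product(perplexities, seeds)
}

def layout_similarity(a, b):
    # One global-structure metric: rank correlation of pairwise distances.
    return spearmanr(pdist(a), pdist(b))[0]

for (k1, l1), (k2, l2) in itertools.combinations(layouts.items(), 2):
    print(k1, k2, round(layout_similarity(l1, l2), 3))
```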

IEEE VIS 2024 Content: Exploring the Capability of LLMs in Performing Low-Level Visual Analytic Tasks on SVG Data Visualizations

Exploring the Capability of LLMs in Performing Low-Level Visual Analytic Tasks on SVG Data Visualizations

Zhongzheng Xu - Brown University, Providence, United States

Emily Wall - Emory University, Atlanta, United States

Room: To Be Announced

Abstract

Data visualizations help extract insights from datasets, but reaching these insights requires decomposing high-level goals into low-level analytic tasks, which can be difficult for users with varying degrees of data literacy and visualization experience. Recent advancements in large language models (LLMs) have shown promise in lowering barriers for tasks such as writing code and may likewise facilitate visualization insight. Scalable Vector Graphics (SVG), a text-based image format common in data visualizations, aligns well with the text-sequence processing of transformer-based LLMs. In this paper, we explore the capability of LLMs to perform 10 low-level visual analytic tasks defined by Amar, Eagan, and Stasko directly on SVG-based visualizations. Using zero-shot prompts, we instruct the models to provide responses or to modify the SVG code based on the given visualizations. Our findings demonstrate that LLMs can effectively modify existing SVG visualizations for some tasks, such as Cluster, but perform poorly on tasks requiring mathematical operations, such as Compute Derived Value. We also discovered that LLM performance can vary with factors such as the number of data points, the presence of value labels, and the chart type. Our findings contribute to gauging the general capabilities of LLMs and highlight the need for further exploration and development to fully harness their potential in supporting visual analytic tasks.
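
For readers unfamiliar with the setup, a zero-shot prompt over raw SVG might look like the sketch below. The prompt wording, the bar_chart.svg file, and the call_llm helper are hypothetical illustrations, not the authors' protocol.

```python
# Sketch: build a zero-shot prompt (no worked examples) over raw SVG markup.
from pathlib import Path

def build_prompt(svg_source: str, task_instruction: str) -> str:
    """Combine one low-level task instruction with the SVG source."""
    return (
        "You are given the SVG source of a data visualization.\n"
        f"{task_instruction}\n"
        "Answer using only information encoded in the SVG.\n\n"
        + svg_source
    )

svg = Path("bar_chart.svg").read_text()  # hypothetical input file
prompt = build_prompt(svg, "Retrieve Value: what is the height of the bar labeled 'B'?")
# response = call_llm(prompt)  # hypothetical LLM client call
```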

IEEE VIS 2024 Content: Evaluating Graphical Perception of Visual Motion for Quantitative Data Encoding

Evaluating Graphical Perception of Visual Motion for Quantitative Data Encoding

Shaghayegh Esmaeili -

Samia Kabir -

Anthony M. Colas -

Rhema P. Linder -

Eric D. Ragan -

Room: To Be Announced

Keywords

Information visualization, animation and motion-related techniques, empirical study, graphical perception, evaluation.

Abstract

Information visualization uses various types of representations to encode data into graphical formats. Prior work on visualization techniques has evaluated the accuracy with which numerical values are perceived from visual encodings such as position, length, orientation, size, and color. Our work extends graphical-perception research to the use of motion as an encoding for quantitative values. We present two experiments implementing multiple fundamental aspects of motion, such as type, speed, and synchronicity, that can be used for numerical value encoding, and we compare motion to static visual encodings in terms of user perception and accuracy. We studied how well users can assess the differences between several types of motion and static visual encodings, and we present an updated ranking of accuracy for quantitative judgments. Our results indicate that non-synchronized motion can be interpreted more quickly and more accurately than synchronized motion. Moreover, our ranking of static and motion-based representations shows that motion, especially the expansion and translation types, has great potential as an encoding technique for quantitative values. Finally, we discuss the implications for the use of animation and motion for numerical representations in data visualization.

IEEE VIS 2024 Content: V-Mail: 3D-Enabled Correspondence about Spatial Data on (Almost) All Your Devices

V-Mail: 3D-Enabled Correspondence about Spatial Data on (Almost) All Your Devices

Jung Who Nam -

Tobias Isenberg -

Daniel F. Keefe -

Room: To Be Announced

Keywords

Human-computer interaction, visualization of scientific 3D data, communication, storytelling, immersive analytics

Abstract

We present V-Mail, a framework of cross-platform applications, interactive techniques, and communication protocols for improved multi-person correspondence about spatial 3D datasets. Inspired by the daily use of e-mail, V-Mail seeks to enable a similar style of rapid, multi-person communication accessible on any device; however, it aims to do this in the new context of spatial 3D communication, where limited access to 3D graphics hardware typically prevents such communication. The approach integrates visual data storytelling with data exploration, spatial annotations, and animated transitions. V-Mail "data stories" are exported in a standard video file format to establish a common baseline level of access on (almost) any device. The V-Mail framework also includes a series of complementary client applications and plugins that enable different degrees of story co-authoring and data exploration, adjusted automatically to match the capabilities of various devices. A lightweight, phone-based V-Mail app makes it possible to annotate data by adding captions to the video. These spatial annotations are then immediately accessible to team members running high-end 3D graphics visualization systems that also include a V-Mail client, implemented as a plugin. Results and evaluation from applying V-Mail to assist communication within an interdisciplinary science team studying Antarctic ice sheets confirm the utility of the asynchronous, cross-platform collaborative framework while also highlighting some current limitations and opportunities for future work.

IEEE VIS 2024 Content: How Does Automation Shape the Process of Narrative Visualization: A Survey of Tools

How Does Automation Shape the Process of Narrative Visualization: A Survey of Tools

Qing Chen -

Shixiong Cao -

Jiazhe Wang -

Nan Cao -

Room: To Be Announced

Keywords

Data Visualization, Automatic Visualization, Narrative Visualization, Design Space, Authoring Tools, Survey

Abstract

In recent years, narrative visualization has gained much attention. Researchers have proposed different design spaces for various narrative visualization genres and scenarios to facilitate the creation process. As users' needs grow and automation technologies advance, a growing number of tools have been designed and developed. In this study, we summarize six genres of narrative visualization (annotated charts, infographics, timelines & storylines, data comics, scrollytelling & slideshow, and data videos) based on previous research, and four types of tools (design spaces, authoring tools, ML/AI-supported tools, and ML/AI-generator tools) based on the tools' level of intelligence and automation. We surveyed 105 papers and tools to study how automation can progressively engage in visualization design and narrative processes to help users easily create narrative visualizations. This research aims to provide an overview of current research and development in the automation involvement of narrative visualization tools. We discuss key research problems in each category and suggest new opportunities to encourage further research in the related domain.

IEEE VIS 2024 Content: Effectiveness of Area-to-Value Legends and Grid Lines in Contiguous Area Cartograms

Effectiveness of Area-to-Value Legends and Grid Lines in Contiguous Area Cartograms

Kelvin L. T. Fung -

Simon T. Perrault -

Michael T. Gastner -

Room: To Be Announced

Keywords

Task Analysis, Symbols, Data Visualization, Sociology, Visualization, Switches, Mice, Cartogram, Geovisualization, Interactive Data Exploration, Quantitative Evaluation

Abstract

A contiguous area cartogram is a geographic map in which the area of each region is proportional to numerical data (e.g., population size) while keeping neighboring regions connected. In this study, we investigated whether value-to-area legends (square symbols next to the values represented by the squares' areas) and grid lines aid map readers in making better area judgments. We conducted an experiment to determine the accuracy, speed, and confidence with which readers infer numerical data values for the mapped regions. We found that, when only informed about the total numerical value represented by the whole cartogram without any legend, the distribution of estimates for individual regions was centered near the true value with substantial spread. Legends with grid lines significantly reduced the spread but led to a tendency to underestimate the values. Comparing differences between regions or between cartograms revealed that legends and grid lines slowed the estimation without improving accuracy. However, participants were more likely to complete the tasks when legends and grid lines were present, particularly when the area units represented by these features could be interactively selected. We recommend considering the cartogram's use case and purpose before deciding whether to include grid lines or an interactive legend.

IEEE VIS 2024 Content: More Than Data Stories: Broadening the Role of Visualization in Contemporary Journalism

More Than Data Stories: Broadening the Role of Visualization in Contemporary Journalism

Yu Fu -

John Stasko -

Room: To Be Announced

Keywords

Computational journalism, data visualization, data-driven storytelling, journalism

Abstract

Data visualization and journalism are deeply connected. From early infographics to recent data-driven storytelling, visualization has become an integrated part of contemporary journalism, primarily as a communication artifact to inform the general public. Data journalism, harnessing the power of data visualization, has emerged as a bridge between the growing volume of data and our society. Visualization research that centers around data storytelling has sought to understand and facilitate such journalistic endeavors. However, a recent metamorphosis in journalism has brought broader challenges and opportunities that extend beyond mere communication of data. We present this article to enhance our understanding of such transformations and thus broaden visualization research's scope and practical contribution to this evolving field. We first survey recent significant shifts, emerging challenges, and computational practices in journalism. We then summarize six roles of computing in journalism and their implications. Based on these implications, we provide propositions for visualization research concerning each role. Ultimately, by mapping the roles and propositions onto a proposed ecological model and contextualizing existing visualization research, we surface seven general topics and a series of research agendas that can guide future visualization research at this intersection.

IEEE VIS 2024 Content: What Does the Chart Say? Grouping Cues Guide Viewer Comparisons and Conclusions in Bar Charts

What Does the Chart Say? Grouping Cues Guide Viewer Comparisons and Conclusions in Bar Charts

Cindy Xiong Bearfield -

Chase Stokes -

Andrew Lovett -

Steven Franconeri -

Room: To Be Announced

Keywords

comparison, perception, visual grouping, bar charts, verbal conclusions.

Abstract

Reading a visualization is like reading a paragraph. Each sentence is a comparison: the mean of these is higher than those; this difference is smaller than that. What determines which comparisons are made first? The viewer's goals and expertise matter, but the way that values are visually grouped together within the chart also impacts those comparisons. Research from psychology suggests that comparisons involve multiple steps. First, the viewer divides the visualization into a set of units. This might include a single bar or a grouped set of bars. Then the viewer selects and compares two of these units, perhaps noting that one pair of bars is longer than another. Viewers might take an additional third step and perform a second-order comparison, perhaps determining that the difference between one pair of bars is greater than the difference between another pair. We create a visual comparison taxonomy that allows us to develop and test a sequence of hypotheses about which comparisons people are more likely to make when reading a visualization. We find that people tend to compare two groups before comparing two individual bars and that second-order comparisons are rare. Visual cues like spatial proximity and color can influence which elements are grouped together and selected for comparison, with spatial proximity being the stronger grouping cue. Interestingly, once viewers have grouped together and compared a set of bars, regardless of whether the group was formed by spatial proximity or color similarity, they no longer consider other possible groupings in their comparisons.

IEEE VIS 2024 Content: This is the Table I Want! Interactive Data Transformation on Desktop and in Virtual Reality

This is the Table I Want! Interactive Data Transformation on Desktop and in Virtual Reality

Sungwon In -

Tica Lin -

Chris North -

Hanspeter Pfister -

Yalong Yang -

Room: To Be Announced

Keywords

Immersive Analytics, Data Transformation, Data Science, Interaction, Empirical Study, Virtual/Augmented/Mixed Reality

Abstract

Data transformation is an essential step in data science. While experts primarily use programming to transform their data, there is an increasing need to support non-programmers with user-interface-based tools. Given the rapid development of interaction techniques and computing environments, we report our empirical findings on the effects of interaction techniques and environments on data transformation tasks. Specifically, we studied the potential benefits of direct interaction and virtual reality (VR) for data transformation. We compared gesture interaction with a standard WIMP user interface, each on the desktop and in VR. With the tested data and tasks, we found that time performance was similar between desktop and VR. Meanwhile, VR shows preliminary evidence of better supporting provenance and sense-making throughout the data transformation process. Our exploration of data transformation in VR also provides initial support for enabling an iterative and fully immersive data science workflow.

IEEE VIS 2024 Content: Visualizing and Comparing Machine Learning Predictions to Improve Human-AI Teaming on the Example of Cell Lineage

Visualizing and Comparing Machine Learning Predictions to Improve Human-AI Teaming on the Example of Cell Lineage

Jiayi Hong -

Ross Maciejewski -

Alain Trubuil -

Tobias Isenberg -

Room: To Be Announced

Keywords

Visualization, visual analytics, machine learning, comparing ML predictions, human-AI teaming, plant biology, cell lineage

Abstract

We visualize the predictions of multiple machine learning models to help biologists as they interactively make decisions about cell lineage, that is, the development of a (plant) embryo from a single ovum cell. Traditionally, biologists construct the cell lineage manually from a confocal microscopy dataset, starting from this observation and reasoning backward in time to establish the cells' inheritance. To speed up this tedious process, we make use of machine learning (ML) models trained on a database of manually established cell lineages to assist the biologist in cell assignment. Most biologists, however, are not familiar with ML, nor is it clear to them which model best predicts the embryo's development. We therefore developed a visualization system designed to support biologists in exploring and comparing ML models, checking model predictions, detecting possible ML model mistakes, and deciding on the most likely embryo development. To evaluate the system, we deployed our interface with six biologists in an observational study. Our results show that the visual representations of machine learning are easily understandable, and that our tool, LineageD+, could potentially increase biologists' working efficiency and enhance their understanding of embryos.

IEEE VIS 2024 Content: SmartGD: A GAN-Based Graph Drawing Framework for Diverse Aesthetic Goals

SmartGD: A GAN-Based Graph Drawing Framework for Diverse Aesthetic Goals

Xiaoqi Wang -

Kevin Yen -

Yifan Hu -

Han-Wei Shen -

Room: To Be Announced

Abstract

A multitude of studies have been conducted on graph drawing, but many existing methods focus on optimizing only a single aesthetic aspect of graph layouts. A few existing methods attempt to provide a flexible solution for optimizing different aesthetic aspects measured by different criteria. Furthermore, thanks to significant advances in deep learning techniques, several deep learning-based layout methods have been proposed recently, demonstrating the advantages of deep learning approaches for graph drawing. However, none of these existing methods can be directly applied to optimizing non-differentiable criteria without special accommodation. In this work, we propose a novel Generative Adversarial Network (GAN) based deep learning framework for graph drawing, called SmartGD, which can optimize any quantitative aesthetic goal, even when it is non-differentiable. In cases where the aesthetic goal is too abstract to be described mathematically, SmartGD can draw graphs in a style similar to a collection of good layout examples, which might be selected by humans based on the abstract goal. To demonstrate the effectiveness and efficiency of SmartGD, we conduct experiments on minimizing stress, minimizing edge crossings, maximizing crossing angle, and a combination of multiple aesthetics. Compared with several popular graph drawing algorithms, the experimental results show that SmartGD achieves good performance both quantitatively and qualitatively.
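
For concreteness, two of the aesthetic criteria named above written out in their standard formulations (our notation, not quoted from the paper): stress is differentiable in the node positions, whereas the edge-crossing count is a discrete indicator sum, which is what rules out plain gradient descent and motivates a learned surrogate.

```latex
% Stress of a layout X = (x_1, ..., x_n): d_ij are graph-theoretic distances,
% with the conventional weights w_ij = d_ij^{-2}. Differentiable in X.
\mathrm{stress}(X) = \sum_{i<j} w_{ij}\,\bigl(\lVert x_i - x_j \rVert - d_{ij}\bigr)^{2}

% Edge-crossing count: an indicator sum over pairs of edges, hence discrete
% and non-differentiable in X.
\mathrm{cross}(X) = \sum_{\{e,f\} \subseteq E,\; e \neq f} \mathbf{1}\bigl[\, e \text{ crosses } f \text{ in } X \,\bigr]
```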

IEEE VIS 2024 Content: On Network Structural and Temporal Encodings: A Space and Time Odyssey

On Network Structural and Temporal Encodings: A Space and Time Odyssey

Velitchko Filipov -

Alessio Arleo -

Markus Bögl -

Silvia Miksch -

Room: To Be Announced

Abstract

The dynamic network visualization design space consists of two major dimensions: network structural and temporal representation. As more techniques are developed and published, a clear need emerges for evaluations and experimental comparisons between them. Most studies explore the temporal dimension and diverse interaction techniques supporting the participants while focusing on a single structural representation. Empirical evidence about performance and preference for different visualization approaches is scattered across studies, experimental settings, and tasks. This paper comprehensively investigates the dynamic network visualization design space in two evaluations. First, a controlled study assesses participants' response times, accuracy, and preferences for different combinations of network structural and temporal representations on typical dynamic network exploration tasks, with and without the support of standard interaction methods. Second, the best-performing combinations from the first study are enhanced based on participants' feedback and evaluated in a heuristic-based qualitative study with visualization experts on a real-world network. Our results highlight node-link with animation and playback controls as the best-performing combination and the most preferred based on ratings. Matrices achieve similar performance to node-link in the first study but score considerably lower in our second evaluation. Similarly, juxtaposition exhibits evident scalability issues in more realistic analysis contexts.

IEEE VIS 2024 Content: AdaVis: Adaptive and Explainable Visualization Recommendation for Tabular Data

AdaVis: Adaptive and Explainable Visualization Recommendation for Tabular Data

Songheng Zhang -

Yong Wang -

Haotian Li -

Huamin Qu -

Room: To Be Announced

Keywords

Visualization Recommendation, Logical Reasoning, Data Visualization, Knowledge Graph

Abstract

Automated visualization recommendation facilitates the rapid creation of effective visualizations, which is especially beneficial for users with limited time and limited knowledge of data visualization. There is an increasing trend toward leveraging machine learning (ML) techniques for end-to-end visualization recommendation. However, existing ML-based approaches implicitly assume that there is only one appropriate visualization for a given dataset, which is often not true in real applications. Also, they often work as black boxes, making it difficult for users to understand why specific visualizations are recommended. To fill this research gap, we propose AdaVis, an adaptive and explainable approach that recommends one or multiple appropriate visualizations for a tabular dataset. It leverages a box-embedding-based knowledge graph to model the possible one-to-many mapping relations among different entities (i.e., data features, dataset columns, datasets, and visualization choices). The embeddings of the entities and relations can be learned from dataset-visualization pairs. AdaVis also incorporates an attention mechanism into the inference framework. Attention can indicate the relative importance of data features for a dataset and provide fine-grained explainability. Our extensive evaluations through quantitative metrics, case studies, and user interviews demonstrate the effectiveness of AdaVis.
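
To give a sense of the box-embedding idea, the toy sketch below scores a one-to-many relation by the soft containment of one axis-aligned box in another. The two dimensions, the example boxes, and the scoring are illustrative assumptions for this sketch, not the trained AdaVis model.

```python
# Toy box embeddings: entities as axis-aligned boxes; containment as a score.
import numpy as np

class Box:
    def __init__(self, lower, upper):
        self.lower = np.asarray(lower, dtype=float)
        self.upper = np.asarray(upper, dtype=float)

def volume(a: Box) -> float:
    return float(np.prod(a.upper - a.lower))

def overlap_volume(a: Box, b: Box) -> float:
    # Intersection box edge lengths, clipped at zero when boxes are disjoint.
    edges = np.minimum(a.upper, b.upper) - np.maximum(a.lower, b.lower)
    return float(np.prod(np.clip(edges, 0.0, None)))

def containment(a: Box, b: Box) -> float:
    """Fraction of b's volume lying inside a: a soft 'b is an instance of a'."""
    return overlap_volume(a, b) / volume(b)

dataset = Box([0.2, 0.1], [0.9, 0.8])     # hypothetical dataset box
bar_chart = Box([0.0, 0.0], [1.0, 0.6])   # hypothetical visualization-choice box
print(containment(bar_chart, dataset))     # higher value -> better fit
```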

IEEE VIS 2024 Content: GeoLinter: A Linting Framework for Choropleth Maps

GeoLinter: A Linting Framework for Choropleth Maps

Fan Lei -

Arlen Fan -

Alan M. MacEachren -

Ross Maciejewski -

Room: To Be Announced

Keywords

Data visualization, image color analysis, geology, recommender systems, guidelines, bars, visualization. Author keywords: automated visualization design, choropleth maps, visualization linting, visualization recommendation

Abstract

Visualization linting is a proven, effective tool for assisting users in following established visualization guidelines. Despite this success, visualization linting for choropleth maps, one of the most popular visualization types on the internet, has yet to be investigated. In this paper, we present GeoLinter, a linting framework for choropleth maps that assists in creating accurate and robust maps. Based on a set of design guidelines and metrics drawn from a collection of best practices in the cartographic literature, GeoLinter detects potentially suboptimal design decisions and provides recommendations for design improvement, with explanations, at each step of the design process. We perform a validation study to evaluate the proposed framework's ability to identify and fix errors, and we apply its results to improve the robustness of GeoLinter. Finally, we demonstrate the effectiveness of GeoLinter, validated through empirical studies, by applying it to a series of case studies using real-world datasets.
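
As an illustration of the kind of guideline check such a linter performs, the sketch below flags a choropleth specification whose class count falls outside common cartographic guidance. The spec format and the 3-to-7-class rule of thumb are assumptions of this sketch, not GeoLinter's actual rule set.

```python
# Sketch: one lint rule over a minimal choropleth spec (a dict of class breaks).
def lint_class_count(spec: dict) -> list[str]:
    warnings = []
    n_classes = len(spec.get("breaks", [])) + 1   # k breaks -> k + 1 classes
    if not 3 <= n_classes <= 7:
        warnings.append(
            f"{n_classes} classes: guidance commonly recommends 3-7 for readability"
        )
    return warnings

# Nine classes -> the rule fires.
print(lint_class_count({"breaks": [10, 20, 30, 40, 50, 60, 70, 80]}))
```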

IEEE VIS 2024 Content: Eliciting Model Steering Interactions from Users via Data and Visual Design Probes

Eliciting Model Steering Interactions from Users via Data and Visual Design Probes

Anamaria Crisan -

Maddie Shang -

Eric Brochu -

Room: To Be Announced

Keywords

Design Probes, Interactive Machine Learning, Model Steering, Semantic Interaction

Abstract

Visual and interactive machine learning (IML) systems are becoming ubiquitous as they empower individuals with varied machine learning expertise to analyze data. However, it remains complex to align interactions with visual marks with a user's intent for steering machine learning models. We explore the use of data and visual design probes to elicit users' desired interactions for steering ML models via visual encodings within IML interfaces. We conducted an elicitation study with 20 data analysts with varying expertise in ML. We summarize our findings as target-interaction pairs, which we compare to prior systems to assess the utility of the probes. We additionally surfaced insights about the factors influencing how and why participants chose to interact with visual encodings, including refraining from interacting. Finally, we reflect on the value of gathering such formative empirical evidence via data and visual design probes ahead of developing IML prototypes.

IEEE VIS 2024 Content: Eliciting Multimodal and Collaborative Interactions for Data Exploration on Large Vertical Displays

Eliciting Multimodal and Collaborative Interactions for Data Exploration on Large Vertical Displays

Gabriela Molina León -

Petra Isenberg -

Andreas Breiter -

Room: To Be Announced

Keywords

Multimodal interaction, collaborative work, large vertical displays, elicitation study, spatio-temporal data

Abstract

We examined user preferences for combining multiple interaction modalities for collaborative interaction with data shown on large vertical displays. Large vertical displays facilitate visual data exploration and allow the use of diverse interaction modalities by multiple users at different distances from the screen. Yet, how to offer multiple interaction modalities is a non-trivial problem. We conducted an elicitation study with 20 participants that generated 1015 interaction proposals combining touch, speech, pen, and mid-air gestures. Given the opportunity to interact using these four modalities, participants preferred speech interaction in 10 of 15 low-level tasks and direct manipulation for straightforward tasks such as showing a tooltip or selecting. In contrast to previous work, participants most favored unimodal and personal interactions. We identified what we call collaborative synonyms among their interaction proposals and found that pairs of users collaborated either unimodally and simultaneously or multimodally and sequentially. We provide insights into how end-users associate visual exploration tasks with certain modalities and how they collaborate at different interaction distances using specific interaction modalities. The supplemental material is available at https://osf.io/m8zuh.

IEEE VIS 2024 Content: Interpreting High-Dimensional Projections With Capacity

Interpreting High-Dimensional Projections With Capacity

Yang Zhang -

Jisheng Liu -

Chufan Lai -

Yuan Zhou -

Siming Chen -

Room: To Be Announced

Abstract

Dimensionality reduction (DR) algorithms are diverse and widely used for analyzing high-dimensional data. Various metrics and tools have been proposed to evaluate and interpret DR results. However, most metrics and methods do not generalize well to measuring arbitrary DR results from the perspective of fidelity to the original distribution, or they lack interactive exploration of DR results. There is still a need for more intuitive, quantitative analysis to interactively explore high-dimensional data and improve interpretability. We propose a metric and a generalized, algorithm-agnostic approach based on the concept of capacity to evaluate and analyze DR results. Based on this approach, we develop a visual analytics system, HiLow, for exploring high-dimensional data and projections. We also propose a mixed-initiative recommendation algorithm that assists users in interactively manipulating DR results. Users can compare differences in the data distribution after interaction through HiLow. Furthermore, we propose a novel visualization design focusing on the quantitative analysis of differences between high- and low-dimensional data distributions. Finally, through a user study and case studies, we validate the effectiveness of our approach and system in enhancing the interpretability of projections and analyzing the distributions of high- and low-dimensional data.

IEEE VIS 2024 Content: What Do We Mean When We Say “Insight”? A Formal Synthesis of Existing Theory

What Do We Mean When We Say “Insight”? A Formal Synthesis of Existing Theory

Leilani Battle

Alvitta Ottley

Room: To Be Announced

Abstract

Researchers have derived many theoretical models for specifying users’ insights as they interact with a visualization system. These representations are essential for understanding the insight discovery process, such as when inferring user interaction patterns that lead to insight or assessing the rigor of reported insights. However, theoretical models can be difficult to apply to existing tools and user studies, often due to discrepancies in how insight and its constituent parts are defined. This paper calls attention to the consistent structures that recur across the visualization literature and describes how they connect multiple theoretical representations of insight. We synthesize a unified formalism for insights using these structures, enabling a wider audience of researchers and developers to adopt the corresponding models. Through a series of theoretical case studies, we use our formalism to compare and contrast existing theories, revealing interesting research challenges in reasoning about a user's domain knowledge and leveraging synergistic approaches in data mining and data management research.

IEEE VIS 2024 Content: Wasserstein Dictionaries of Persistence Diagrams

Wasserstein Dictionaries of Persistence Diagrams

Keanu Sisouk

Julie Delon

Julien Tierny

Room: To Be Announced

Keywords

Topological data analysis, ensemble data, persistence diagrams

Abstract

This paper presents a computational framework for the concise encoding of an ensemble of persistence diagrams, in the form of weighted Wasserstein barycenters [100], [102] of a dictionary of atom diagrams. We introduce a multi-scale gradient descent approach for the efficient resolution of the corresponding minimization problem, which interleaves the optimization of the barycenter weights with the optimization of the atom diagrams. Our approach leverages the analytic expressions for the gradient of both sub-problems to ensure fast iterations and additionally exploits shared-memory parallelism. Extensive experiments on public ensembles demonstrate the efficiency of our approach, with Wasserstein dictionary computations on the order of minutes for the largest examples. We show the utility of our contributions in two applications. First, we apply Wasserstein dictionaries to data reduction and reliably compress persistence diagrams by concisely representing them with their weights in the dictionary. Second, we present a dimensionality reduction framework based on a Wasserstein dictionary defined with a small number of atoms (typically three) and encode the dictionary as a low-dimensional simplex embedded in a visual space (typically in 2D). In both applications, quantitative experiments assess the relevance of our framework. Finally, we provide a C++ implementation that can be used to reproduce our results.
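
The heart of the method is expressing each diagram as barycentric weights over a few atom diagrams. The sketch below conveys that encoding step in strongly simplified form: diagrams are rasterized to histograms and weights are fit by non-negative least squares, a Euclidean stand-in for the Wasserstein geometry and the interleaved multi-scale optimization of the paper. The (0, 1) birth/persistence range is an assumption.

import numpy as np
from scipy.optimize import nnls

def vectorize(diagram, bins=16, rng=((0, 1), (0, 1))):
    # Crude 2D histogram of (birth, persistence) pairs; assumes diagrams
    # normalized to [0, 1]. A stand-in for true Wasserstein geometry.
    pts = np.asarray(diagram)
    H, _, _ = np.histogram2d(pts[:, 0], pts[:, 1] - pts[:, 0],
                             bins=bins, range=rng)
    return H.ravel() / max(H.sum(), 1)

def encode(diagram, atoms):
    # Weights of a diagram over the atom diagrams, via non-negative
    # least squares, renormalized to barycentric (sum-to-one) weights.
    A = np.stack([vectorize(a) for a in atoms], axis=1)
    w, _ = nnls(A, vectorize(diagram))
    return w / max(w.sum(), 1e-12)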

IEEE VIS 2024 Content: Submerse: Visualizing Storm Surge Flooding Simulations in Immersive Display Ecologies

Submerse: Visualizing Storm Surge Flooding Simulations in Immersive Display Ecologies

Saeed Boorboor

Yoonsang Kim

Ping Hu

Josef Moses

Brian Colle

Arie E. Kaufman

Room: To Be Announced

Keywords

Camera navigation, flooding simulation visualization, immersive visualization, mixed reality

Abstract

We present Submerse, an end-to-end framework for visualizing flooding scenarios on large and immersive display ecologies. Specifically, we reconstruct a surface mesh from input flood simulation data and generate a to-scale 3D virtual scene by incorporating geographical data such as terrain, textures, buildings, and additional scene objects. To optimize computation and memory performance for large simulation datasets, we discretize the data on an adaptive grid using dynamic quadtrees and support level-of-detail-based rendering. Moreover, to provide a perception of flooding direction for a given time instance, we animate the surface mesh by synthesizing water waves. As interaction is key for effective decision-making and analysis, we introduce two novel techniques for flood visualization in immersive systems: (1) an automatic scene-navigation method using optimal camera viewpoints generated for marked points-of-interest based on the display layout, and (2) an AR-based focus+context technique using an auxiliary display system. Submerse is developed in collaboration between computer scientists and atmospheric scientists. We evaluate the effectiveness of our system and application by conducting workshops with emergency managers, domain experts, and concerned stakeholders in the Stony Brook Reality Deck, an immersive gigapixel facility, to visualize a superstorm flooding scenario in New York City.
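
The adaptive-grid idea is straightforward to sketch: subdivide the simulation domain with a quadtree until each cell's flood depth is nearly uniform, then render one level-of-detail primitive per leaf. A minimal recursive version follows, assuming a square power-of-two depth grid; the tolerance and depth limits are illustrative, not taken from the paper.

import numpy as np

def build_quadtree(depth_grid, x0, y0, size, tol=0.05, max_depth=8, level=0):
    # Recursively subdivide a flood-depth tile until it is nearly uniform.
    # Returns leaf cells (x, y, size, mean_depth) for LOD rendering.
    tile = depth_grid[y0:y0 + size, x0:x0 + size]
    if level >= max_depth or size <= 1 or np.ptp(tile) <= tol:
        return [(x0, y0, size, float(tile.mean()))]
    h = size // 2   # assumes size is a power of two
    leaves = []
    for dx, dy in ((0, 0), (h, 0), (0, h), (h, h)):
        leaves += build_quadtree(depth_grid, x0 + dx, y0 + dy, h,
                                 tol, max_depth, level + 1)
    return leaves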

IEEE VIS 2024 Content: QuantumEyes: Towards Better Interpretability of Quantum Circuits

QuantumEyes: Towards Better Interpretability of Quantum Circuits

Shaolun Ruan

Qiang Guan

Paul Griffin

Ying Mao

Yong Wang

Room: To Be Announced

Keywords

Data visualization, design study, interpretability, quantum computing.

Abstract

Quantum computing offers significant speedup compared to classical computing, which has led to a growing interest among users in learning and applying quantum computing across various applications. However, quantum circuits, which are fundamental for implementing quantum algorithms, can be challenging for users to understand due to their underlying logic, such as the temporal evolution of quantum states and the effect of quantum amplitudes on the probability of basis quantum states. To fill this research gap, we propose QuantumEyes, an interactive visual analytics system to enhance the interpretability of quantum circuits at both the global and local levels. For the global-level analysis, we present three coupled visualizations to delineate the changes of quantum states and the underlying reasons: a Probability Summary View to overview the probability evolution of quantum states; a State Evolution View to enable an in-depth analysis of the influence of quantum gates on the quantum states; and a Gate Explanation View to show the individual qubit states and facilitate a better understanding of the effect of quantum gates. For the local-level analysis, we design a novel geometrical visualization, the dandelion chart, to explicitly reveal how the quantum amplitudes affect the probability of the quantum state. We thoroughly evaluated QuantumEyes, as well as the dandelion chart integrated into it, through two case studies on different types of quantum algorithms and in-depth expert interviews with 12 domain experts. The results demonstrate the effectiveness and usability of our approach in enhancing the interpretability of quantum circuits.
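
The local-level quantity the dandelion chart encodes follows directly from the Born rule: the probability of observing a basis state is the squared magnitude of its amplitude. A minimal sketch:

import numpy as np

def basis_probabilities(state):
    # Born rule: p(b) = |amplitude_b|^2, renormalized to guard against
    # numerical drift in the state vector.
    p = np.abs(np.asarray(state, dtype=complex)) ** 2
    return p / p.sum()

# Bell state (|00> + |11>)/sqrt(2) -> [0.5, 0, 0, 0.5]
print(basis_probabilities([2 ** -0.5, 0, 0, 2 ** -0.5]))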

IEEE VIS 2024 Content: SenseMap: Urban Performance Visualization and Analytics via Semantic Textual Similarity

SenseMap: Urban Performance Visualization and Analytics via Semantic Textual Similarity

Juntong Chen

Qiaoyun Huang

Changbo Wang

Chenhui Li

Room: To Be Announced

Keywords

Urban data, semantic textual similarity, point of interest, density map, visual analytics, visualization design

Abstract

As urban populations grow, effectively assessing urban performance measures such as livability and comfort becomes increasingly important due to their significant socioeconomic impacts. While Point of Interest (POI) data has been utilized for various applications in location-based services, its potential for urban performance analytics remains unexplored. In this paper, we present SenseMap, a novel approach for analyzing urban performance by leveraging POI data as a semantic representation of urban functions. We quantify the contribution of POIs to different urban performance measures by calculating semantic textual similarities on our constructed corpus. We propose Semantic-adaptive Kernel Density Estimation, which takes into account POIs' influential areas across different Traffic Analysis Zones and their semantic contributions to generate semantic density maps for the measures. We design and implement a feature-rich, real-time visual analytics system for users to explore the urban performance of their surroundings. Evaluations with human judgment and reference data demonstrate the feasibility and validity of our method. Usage scenarios and user studies demonstrate the capability, usability, and explainability of our system.
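
The core computation is a kernel density estimate in which each POI contributes according to how semantically close its description is to the target measure. The sketch below is a simplification: it uses one global bandwidth and precomputed similarity weights, whereas the paper derives adaptive bandwidths per Traffic Analysis Zone and similarities from its own corpus; the embedding source is left unspecified here.

import numpy as np

def semantic_weighted_kde(poi_xy, poi_weights, grid_xy, bandwidth=250.0):
    # Gaussian KDE where each POI is weighted by its semantic similarity
    # to the target measure (e.g., "livability"). poi_xy: (P, 2),
    # poi_weights: (P,), grid_xy: (G, 2); returns a density per grid cell.
    d2 = ((grid_xy[:, None, :] - poi_xy[None, :, :]) ** 2).sum(-1)
    k = np.exp(-0.5 * d2 / bandwidth ** 2)
    return (k * poi_weights[None, :]).sum(1)

def cosine(u, v):
    # poi_weights could come from cosine similarity between a POI
    # description embedding and a measure-query embedding (the embedding
    # model is an assumption, not specified by this listing).
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))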

IEEE VIS 2024 Content: Preliminary Guidelines For Combining Data Integration and Visual Data Analysis

Preliminary Guidelines For Combining Data Integration and Visual Data Analysis

Adam Coscia

Ashley Suh

Remco Chang

Alex Endert

Room: To Be Announced

Keywords

Visual analytics, Data integration, User interface design, Integration strategies, Analytical behaviors.

Abstract

Data integration is often performed to consolidate information from multiple disparate data sources during visual data analysis. However, integration operations are usually separate from visual analytics operations such as encode and filter in both interface design and empirical research. We conducted a preliminary user study to investigate whether and how data integration should be incorporated directly into the visual analytics process. We used two interface alternatives featuring contrasting approaches to the data preparation and analysis workflow: manual file-based ex-situ integration as a separate step from visual analytics operations; and automatic UI-based in-situ integration merged with visual analytics operations. Participants were asked to complete specific and free-form tasks with each interface, browsing for patterns, generating insights, and summarizing relationships between attributes distributed across multiple files. Analyzing participants' interactions and feedback, we found both task completion time and total interactions to be similar across interfaces and tasks, as well as unique integration strategies between interfaces and emergent behaviors related to satisficing and cognitive bias. Participants' time spent and interactions revealed that in-situ integration enabled users to spend more time on analysis tasks compared with ex-situ integration. Participants' integration strategies and analytical behaviors revealed differences in interface usage for generating and tracking hypotheses and insights. With these results, we synthesized preliminary guidelines for designing future visual analytics interfaces that can support integrating attributes throughout an active analysis process.

IEEE VIS 2024 Content: Wasserstein Auto-Encoders of Merge Trees (and Persistence Diagrams)

Wasserstein Auto-Encoders of Merge Trees (and Persistence Diagrams)

Mathieu Pont

Julien Tierny

Room: To Be Announced

Keywords

Topological data analysis, ensemble data, persistence diagrams, merge trees, auto-encoders, neural networks

Abstract

This paper presents a computational framework for the Wasserstein auto-encoding of merge trees (MT-WAE), a novel extension of the classical auto-encoder neural network architecture to the Wasserstein metric space of merge trees. In contrast to traditional auto-encoders, which operate on vectorized data, our formulation explicitly manipulates merge trees on their associated metric space at each layer of the network, resulting in superior accuracy and interpretability. Our novel neural network approach can be interpreted as a non-linear generalization of previous linear attempts [79] at merge tree encoding. It also trivially extends to persistence diagrams. Extensive experiments on public ensembles demonstrate the efficiency of our algorithms, with MT-WAE computations on the order of minutes on average. We show the utility of our contributions in two applications adapted from previous work on merge tree encoding [79]. First, we apply MT-WAE to merge tree compression, by concisely representing merge trees with their coordinates in the final layer of our auto-encoder. Second, we document an application to dimensionality reduction, by exploiting the latent space of our auto-encoder, for the visual analysis of ensemble data. We illustrate the versatility of our framework by introducing two penalty terms, to help preserve in the latent space both the Wasserstein distances between merge trees and their clusters. In both applications, quantitative experiments assess the relevance of our framework. Finally, we provide a C++ implementation that can be used for reproducibility.
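
To make the distance-preservation penalty concrete, here is a minimal sketch of an auto-encoder loss of that shape, but over plain Euclidean vectors: the actual method operates on merge trees in their Wasserstein metric space, and the model object with encoder/decoder attributes is an assumption of this sketch.

import torch

def wae_loss(model, x, lam=0.1):
    # Reconstruction loss plus a penalty encouraging the latent space to
    # preserve pairwise input distances (Euclidean here; the paper uses
    # Wasserstein distances between merge trees).
    z = model.encoder(x)
    x_hat = model.decoder(z)
    recon = torch.mean((x - x_hat) ** 2)
    d_in = torch.cdist(x, x)
    d_lat = torch.cdist(z, z)
    metric = torch.mean((d_in - d_lat) ** 2)
    return recon + lam * metric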

IEEE VIS 2024 Content: Memory Recall for Data Visualizations in Mixed Reality, Virtual Reality, 3D, and 2D

Memory Recall for Data Visualizations in Mixed Reality, Virtual Reality, 3D, and 2D

Christophe Hurter

Bernice Rogowitz

Guillaume Truong

Tiffany Andry

Hugo Romat

Ludovic Gardy

Fereshteh Amini

Nathalie Henry Riche

Room: To Be Announced

Keywords

Data visualization, Three-dimensional displays, Virtual reality, Mixed reality, Electronic mail, Syntactics, Semantics

Abstract

This article explores how the ability to recall information in data visualizations depends on the presentation technology. Participants viewed 10 Isotype visualizations on a 2D screen, in 3D, in Virtual Reality (VR) and in Mixed Reality (MR). To provide a fair comparison between the three 3D conditions, we used LIDAR to capture the details of the physical rooms, and used this information to create our textured 3D models. For all environments, we measured the number of visualizations recalled and their order (2D) or spatial location (3D, VR, MR). We also measured the number of syntactic and semantic features recalled. Results of our study show increased recall and greater richness of data understanding in the MR condition. Not only did participants recall more visualizations and ordinal/spatial positions in MR, but they also remembered more details about graph axes and data mappings, and more information about the shape of the data. We discuss how differences in the spatial and kinesthetic cues provided in these different environments could contribute to these results, and reasons why we did not observe comparable performance in the 3D and VR conditions.

IEEE VIS 2024 Content: Visual Exploratory Analysis for Designing Large-Scale Network-on-Chip Architectures: A Domain Expert-Led Design Study

Visual Exploratory Analysis for Designing Large-Scale Network-on-Chip Architectures: A Domain Expert-Led Design Study

Shaoyu Wang

Hang Yan

Katherine E. Isaacs

Yifan Sun

Room: To Be Announced

Keywords

Data Visualization, Design Study, Network-on-Chip, Performance Analysis

Abstract

Visualization design studies bring together visualization researchers and domain experts to address yet unsolved data analysis challenges stemming from the needs of the domain experts. Typically, the visualization researchers lead the design study process and implementation of any visualization solutions. This setup leverages the visualization researchers' knowledge of methodology, design, and programming, but limited availability to synchronize with the domain experts can hamper the design process. We consider an alternative setup where the domain experts take the lead in the design study, supported by the visualization experts. In this study, the domain experts are computer architecture experts who simulate and analyze novel computer chip designs. These chips rely on a Network-on-Chip (NOC) to connect components. The experts want to understand how the chip designs perform and what in the design led to their performance. To aid this analysis, we develop Vis4Mesh, a visualization system that provides spatial, temporal, and architectural context to simulated NOC behavior. Integration with an existing computer architecture visualization tool enables architects to perform deep dives into specific architecture component behavior. We validate Vis4Mesh through a case study and a user study with computer architecture researchers. We reflect on our design and process, discussing advantages, disadvantages, and guidance for engaging in domain expert-led design studies.

IEEE VIS 2024 Content: A Visual Analytics System for Analyzing Dynamic Networks with Temporal Network Motifs

A Visual Analytics System for Analyzing Dynamic Networks with Temporal Network Motifs

Seokweon Jung

DongHwa Shin

Hyeon Jeon

Kiroong Choe

Jinwook Seo

Room: To Be Announced

Keywords

Visual analytics, Measurement, Size measurement, Windows, Time measurement, Data visualization, Task analysis, Dynamic networks, Temporal network motifs, Interactive network slicing

Abstract

Partitioning a dynamic network into subsets (i.e., snapshots) based on disjoint time intervals is a widely used technique for understanding how structural patterns of the network evolve. However, selecting an appropriate time window (i.e., slicing a dynamic network into snapshots) is challenging and time-consuming, often involving a trial-and-error approach to investigating underlying structural patterns. To address this challenge, we present MoNetExplorer, a novel interactive visual analytics system that leverages temporal network motifs to provide recommendations for window sizes and support users in visually comparing different slicing results. MoNetExplorer provides a comprehensive analysis based on window size, including (1) a temporal overview to identify the structural information, (2) temporal network motif composition, and (3) node-link-diagram-based details to enable users to identify and understand structural patterns at various temporal resolutions. To demonstrate the effectiveness of our system, we conducted case studies with network researchers using two real-world dynamic network datasets. These case studies show that the system effectively supports users in gaining valuable insights into the temporal and structural aspects of dynamic networks.
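
Slicing and motif counting can be sketched compactly. Below, timestamped edges are bucketed into disjoint windows and each snapshot is scored by a single crude pattern (reciprocated edge pairs); the paper's temporal network motifs are richer, so treat this only as the shape of the computation behind the window-size recommendations.

from collections import Counter

def slice_and_count(edges, window):
    # edges: iterable of (u, v, t) with timestamp t.
    # Bucket edges into disjoint windows, then count reciprocated pairs
    # per snapshot as a crude stand-in for temporal network motifs.
    buckets = {}
    for u, v, t in edges:
        buckets.setdefault(t // window, set()).add((u, v))
    snapshots = Counter()
    for w, es in sorted(buckets.items()):
        snapshots[w] = sum(1 for (u, v) in es if (v, u) in es) // 2
    return snapshots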

IEEE VIS 2024 Content: InVADo: Interactive Visual Analysis of Molecular Docking Data

InVADo: Interactive Visual Analysis of Molecular Docking Data

Marco Schäfer

Nicolas Brich

Jan Byška

Sérgio M. Marques

David Bednář

Philipp Thiel

Barbora Kozlíková

Michael Krone

Room: To Be Announced

Keywords

Molecular Docking, AutoDock, Virtual Screening, Visual Analysis, Visualization, Clustering, Protein-Ligand Interaction.

Abstract

Molecular docking is a key technique in various fields like structural biology, medicinal chemistry, and biotechnology. It is widely used for virtual screening during drug discovery, computer-assisted drug design, and protein engineering. A general molecular docking process consists of the target and ligand selection, their preparation, and the docking process itself, followed by the evaluation of the results. However, the most commonly used docking software provides no or only very basic evaluation capabilities. Scripting and external molecular viewers are often used, but they are not designed for an efficient analysis of docking results. Therefore, we developed InVADo, a comprehensive interactive visual analysis tool for large docking datasets. It consists of multiple linked 2D and 3D views. It filters and spatially clusters the data, and enriches it with post-docking analysis results of protein-ligand interactions and functional groups, to enable well-founded decision-making. In an exemplary case study, domain experts confirmed that InVADo facilitates and accelerates the analysis workflow. They rated it as a convenient, comprehensive, and feature-rich tool, especially useful for virtual screening.
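
The spatial-clustering step can be approximated with off-the-shelf density-based clustering of ligand pose centroids. A minimal sketch; the eps and min_samples values are illustrative, not taken from the paper.

import numpy as np
from sklearn.cluster import DBSCAN

def cluster_poses(pose_centroids_angstrom, eps=2.0, min_samples=3):
    # Group docking poses whose centroids lie within eps angstroms of a
    # dense neighborhood; label -1 marks spatial outliers.
    return DBSCAN(eps=eps, min_samples=min_samples).fit_predict(
        np.asarray(pose_centroids_angstrom))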

IEEE VIS 2024 Content: The Role of Text in Visualizations: How Annotations Shape Perceptions of Bias and Influence Predictions

The Role of Text in Visualizations: How Annotations Shape Perceptions of Bias and Influence Predictions

Chase Stokes

Cindy Xiong Bearfield

Marti Hearst

Room: To Be Announced

Keywords

Visualization, text, annotation, perceived bias, judgment, prediction

Abstract

This paper investigates the role of text in visualizations, specifically the impact of text position, semantic content, and biased wording. Two empirical studies were conducted based on two tasks (predicting data trends and appraising bias) using two visualization types (bar and line charts). While the addition of text had a minimal effect on how people perceive data trends, there was a significant impact on how biased they perceive the authors to be. This finding revealed a relationship between the degree of bias in textual information and the perception of the authors' bias. Exploratory analyses support an interaction between a person's prediction and the degree of bias they perceived. This paper also develops a crowdsourced method for creating chart annotations that range from neutral to highly biased. This research highlights the need for designers to mitigate potential polarization of readers' opinions based on how authors' ideas are expressed.

IEEE VIS 2024 Content: VoxAR: Adaptive Visualization of Volume Rendered Objects in Optical See-Through Augmented Reality

VoxAR: Adaptive Visualization of Volume Rendered Objects in Optical See-Through Augmented Reality

Saeed Boorboor

Matthew S. Castellana

Yoonsang Kim

Zhutian Chen

Johanna Beyer

Hanspeter Pfister

Arie E. Kaufman

Room: To Be Announced

Keywords

Adaptive Visualization, Situated Visualization, Augmented Reality, Volume Rendering

Abstract

We present VoxAR, a method to facilitate an effective visualization of volume-rendered objects in optical see-through head-mounted displays (OST-HMDs). The potential of augmented reality (AR) to integrate digital information into the physical world provides new opportunities for visualizing and interpreting scientific data. However, a limitation of OST-HMD technology is that rendered pixels of a virtual object can interfere with the colors of the real world, making it challenging to perceive the augmented virtual information accurately. We address this challenge in a two-step approach. First, VoxAR determines an appropriate placement of the volume-rendered object in the real-world scene by evaluating a set of spatial and environmental objectives, managed as user-selected preferences and pre-defined constraints. We achieve a real-time solution by implementing the objectives using a GPU shader language. Next, VoxAR adjusts the colors of the input transfer function (TF) based on the real-world placement region. Specifically, we introduce a novel optimization method that adjusts the TF colors such that the resulting volume-rendered pixels are discernible against the background and the TF maintains the perceptual mapping between the colors and data intensity values. Finally, we present an assessment of our approach through objective evaluations and subjective user studies.
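
The TF-adjustment idea can be illustrated with a much simpler routine: nudge a transfer-function color lighter or darker until its contrast ratio against the sampled background clears a threshold. The paper formulates this as an optimization that also preserves the color-to-intensity mapping; the sketch below, with its WCAG-style contrast ratio and linear-light luminance approximation, is only a stand-in.

def relative_luminance(rgb):
    # rgb components in [0, 1]; sRGB gamma handling omitted for brevity.
    r, g, b = rgb
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def adjust_for_contrast(color, bg, target=3.0, step=0.05):
    # Lighten or darken a TF color until its contrast ratio against the
    # background meets the target, or give up after a fixed budget.
    c = list(color)
    for _ in range(40):
        lc, lb = relative_luminance(c), relative_luminance(bg)
        ratio = (max(lc, lb) + 0.05) / (min(lc, lb) + 0.05)
        if ratio >= target:
            break
        s = step if lc >= lb else -step
        c = [min(1.0, max(0.0, v + s)) for v in c]
    return tuple(c)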

IEEE VIS 2024 Content: Designing for Visualization in Motion: Embedding Visualizations in Swimming Videos

Designing for Visualization in Motion: Embedding Visualizations in Swimming Videos

Lijie Yao

Romain Vuillemot

Anastasia Bezerianos

Petra Isenberg

Room: To Be Announced

Keywords

Data visualization, Sports, Videos, Probes, Surveys, Authoring systems, Games, Design framework, Embedded visualization, Sports analytics, Visualization in motion

Abstract

We report on challenges and considerations for supporting design processes for visualizations in motion embedded in sports videos. We derive our insights from analyzing swimming race visualizations and motion-related data, building a technology probe, as well as a study with designers. Understanding how to design situated visualizations in motion is important for a variety of contexts. Competitive sports coverage, in particular, increasingly includes information on athlete or team statistics and records. Although moving visual representations attached to athletes or other targets are starting to appear, systematic investigations on how to best support their design process in the context of sports videos are still missing. Our work makes several contributions in identifying opportunities for visualizations to be added to swimming competition coverage but, most importantly, in identifying requirements and challenges for designing situated visualizations in motion. Our investigations include the analysis of a survey with swimming enthusiasts on their motion-related information needs, an ideation workshop to collect designs and elicit design challenges, the design of a technology probe that allows to create embedded visualizations in motion based on real data, and an evaluation with visualization designers that aimed to understand the benefits of designing directly on videos.

IEEE VIS 2024 Content: Interactive Reweighting for Mitigating Label Quality Issues

Interactive Reweighting for Mitigating Label Quality Issues

Weikai Yang

Yukai Guo

Jing Wu

Zheng Wang

Lan-Zhe Guo

Yu-Feng Li

Shixia Liu

Room: To Be Announced

Abstract

Label quality issues, such as noisy labels and imbalanced class distributions, have negative effects on model performance. Automatic reweighting methods identify problematic samples with label quality issues by recognizing their negative effects on validation samples and assigning lower weights to them. However, these methods fail to achieve satisfactory performance when the validation samples are of low quality. To tackle this, we develop Reweighter, a visual analysis tool for sample reweighting. The reweighting relationships between validation samples and training samples are modeled as a bipartite graph. Based on this graph, a validation sample improvement method is developed to improve the quality of validation samples. Since the automatic improvement may not always be perfect, a co-cluster-based bipartite graph visualization is developed to illustrate the reweighting relationships and support the interactive adjustments to validation samples and reweighting results. The adjustments are converted into the constraints of the validation sample improvement method to further improve validation samples. We demonstrate the effectiveness of Reweighter in improving reweighting results through quantitative evaluation and two case studies.
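 
The automatic-reweighting principle the system visualizes can be sketched for a linear regressor: a training sample receives weight in proportion to how well its gradient agrees with the average validation gradient, so samples that would hurt validation loss are down-weighted. This gradient-alignment rule is a generic stand-in, not the paper's algorithm.

import numpy as np

def reweight(X_tr, y_tr, X_val, y_val, w):
    # Per-sample gradients of squared error for a linear model X @ w.
    g_tr = (X_tr @ w - y_tr)[:, None] * X_tr          # (N, D)
    g_val = ((X_val @ w - y_val)[:, None] * X_val).mean(0)  # (D,)
    align = g_tr @ g_val          # agreement with the validation gradient
    wts = np.clip(align, 0.0, None)   # zero out harmful samples
    return wts / max(wts.sum(), 1e-12)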

IEEE VIS 2024 Content: KD-INR: Time-Varying Volumetric Data Compression via Knowledge Distillation-based Implicit Neural Representation

KD-INR: Time-Varying Volumetric Data Compression via Knowledge Distillation-based Implicit Neural Representation

Jun Han

Hao Zheng

Change Bi

Room: To Be Announced

Keywords

Time-varying data compression, implicit neural representation, knowledge distillation, volume visualization.

Abstract

Traditional deep learning algorithms assume that all data is available during training, which presents challenges when handling large-scale time-varying data. To address this issue, we propose a data reduction pipeline called knowledge distillation-based implicit neural representation (KD-INR) for compressing large-scale time-varying data. The approach consists of two stages: spatial compression and model aggregation. In the first stage, each time step is compressed using an implicit neural representation with bottleneck layers and features of interest preservation-based sampling. In the second stage, we utilize an offline knowledge distillation algorithm to extract knowledge from the trained models and aggregate it into a single model. We evaluated our approach on a variety of time-varying volumetric data sets. Both quantitative and qualitative results, such as PSNR, LPIPS, and rendered images, demonstrate that KD-INR surpasses the state-of-the-art approaches, including learning-based (i.e., CoordNet, NeurComp, and SIREN) and lossy compression (i.e., SZ3, ZFP, and TTHRESH) methods, at various compression ratios ranging from hundreds to ten thousand.
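
In outline, stage one fits one small coordinate network per time step and stage two distills the frozen per-step teachers into a single student by regressing their outputs. The sketch below uses smooth activations in place of SIREN-style sinusoids and omits the bottleneck layers and the feature-preserving sampling; layer sizes are assumptions.

import torch
import torch.nn as nn

class TinyINR(nn.Module):
    # Coordinate network mapping (x, y, z) -> scalar volume value; a
    # minimal stand-in for the per-timestep INRs used by KD-INR.
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, 1))
    def forward(self, xyz):
        return self.net(xyz)

def distill(teacher, student, coords, steps=1000, lr=1e-3):
    # Offline distillation: the student regresses the frozen teacher's
    # outputs at sampled coordinates.
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    with torch.no_grad():
        target = teacher(coords)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.mean((student(coords) - target) ** 2)
        loss.backward()
        opt.step()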

IEEE VIS 2024 Content: Decoupling Judgment and Decision Making: A Tale of Two Tails

Decoupling Judgment and Decision Making: A Tale of Two Tails

Başak Oral

Pierre Dragicevic

Alexandru Telea

Evanthia Dimara

Room: To Be Announced

Keywords

Data visualization, Task analysis, Decision making, Visualization, Bars, Sports, Terminology, Cognition, Judgment, Psychology

Abstract

Is it true that if citizens understand hurricane probabilities, they will make more rational decisions for evacuation? Finding answers to such questions is not straightforward in the literature because the terms “judgment” and “decision making” are often used interchangeably. This terminology conflation leads to a lack of clarity on whether people make suboptimal decisions because of inaccurate judgments of information conveyed in visualizations or because they use alternative yet currently unknown heuristics. To decouple judgment from decision making, we review relevant concepts from the literature and present two preregistered experiments (N=601) to investigate if the task (judgment vs. decision making), the scenario (sports vs. humanitarian), and the visualization (quantile dotplots, density plots, probability bars) affect accuracy. While experiment 1 was inconclusive, we found evidence for a difference in experiment 2. Contrary to our expectations and previous research, which found decisions less accurate than their direct-equivalent judgments, our results pointed in the opposite direction. Our findings further revealed that decisions were less vulnerable to status-quo bias, suggesting decision makers may disfavor responses associated with inaction. We also found that both scenario and visualization types can influence people's judgments and decisions. Although effect sizes are not large and results should be interpreted carefully, we conclude that judgments cannot be safely used as proxy tasks for decision making, and discuss implications for visualization research and beyond. Materials and preregistrations are available at https://osf.io/ufzp5/?view_only=adc0f78a23804c31bf7fdd9385cb264f.

IEEE VIS 2024 Content: A Survey on Progressive Visualization

A Survey on Progressive Visualization

Alex Ulmer -

Marco Angelini -

Jean-Daniel Fekete -

Jörn Kohlhammer -

Thorsten May -

Room: To Be Announced

Keywords

Data visualization, Convergence, Visual analytics, Rendering (computer graphics), Task analysis, Progressive Visual Analytics, Progressive Visualization, Taxonomy, State-of-the-Art Report, Survey

Abstract

Currently, growing data sources and long-running algorithms impede user attention and interaction with visual analytics applications. Progressive visualization (PV) and progressive visual analytics (PVA) alleviate this problem by allowing immediate feedback and interaction with large datasets and complex computations, using partial results that improve over time instead of making users wait for complete results. Creating a progressive visualization requires more effort than a regular one, but it also opens up new possibilities, such as steering the computations towards more relevant parts of the data, thus saving computational resources. However, there is currently no comprehensive overview of the design space for progressive visualization systems. We surveyed the related work on PV and derived a new taxonomy for progressive visualizations by systematically categorizing all PV publications that included visualizations with progressive features. Progressive visualizations can be categorized by well-known visualization taxonomies, but we also found that they can be distinguished by the way they manage their data processing, data domain, and visual update. Furthermore, we identified key properties, such as uncertainty, steering, visual stability, and real-time processing, that differ significantly in progressive applications. We also collected the evaluation methodologies reported by the publications and conclude with statistical findings, research gaps, and open challenges. A continuously updated visual browser of the survey data is available at visualsurvey.net/pva.
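
A minimal Python sketch (ours, not from the survey) of the core pattern being surveyed: a long-running aggregation yields partial results that improve with time, so a front-end can render intermediate estimates and let the user steer or cancel early.

    import numpy as np

    def progressive_mean(data, chunk_size=10_000):
        """Yield (fraction_processed, running_mean) after each chunk."""
        total, count = 0.0, 0
        for start in range(0, len(data), chunk_size):
            chunk = data[start:start + chunk_size]
            total += chunk.sum()
            count += len(chunk)
            yield count / len(data), total / count

    data = np.random.default_rng(0).normal(size=1_000_000)
    for progress, estimate in progressive_mean(data):
        pass  # a progressive front-end would redraw its view here
    print(f"estimate after {progress:.0%} of the data: {estimate:.4f}")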

IEEE VIS 2024 Content: KnowledgeVIS: Interpreting Language Models by Comparing Fill-in-the-Blank Prompts

KnowledgeVIS: Interpreting Language Models by Comparing Fill-in-the-Blank Prompts

Adam Coscia -

Alex Endert -

Room: To Be Announced

Keywords

Visual analytics, language models, prompting, interpretability, machine learning

Abstract

Recent growth in the popularity of large language models has led to their increased usage for summarizing, predicting, and generating text, making it vital to help researchers and engineers understand how and why they work. We present KnowledgeVIS, a human-in-the-loop visual analytics system for interpreting language models using fill-in-the-blank sentences as prompts. By comparing predictions between sentences, KnowledgeVIS reveals learned associations that intuitively connect what language models learn during training to natural language tasks downstream, helping users create and test multiple prompt variations, analyze predicted words using a novel semantic clustering technique, and discover insights using interactive visualizations. Collectively, these visualizations help users identify the likelihood and uniqueness of individual predictions, compare sets of predictions between prompts, and summarize patterns and relationships between predictions across all prompts. We demonstrate the capabilities of KnowledgeVIS with feedback from six NLP experts as well as three different use cases: (1) probing biomedical knowledge in two domain-adapted models; (2) evaluating harmful identity stereotypes; and (3) discovering facts and relationships across three general-purpose models.
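
As a rough illustration of the fill-in-the-blank probing that the system visualizes, the sketch below (our stand-in using the Hugging Face transformers library, not the paper's implementation) queries a masked language model with two prompt variations so their top predictions can be compared.

    from transformers import pipeline

    fill = pipeline("fill-mask", model="bert-base-uncased")

    prompts = [
        "The doctor was hired because of [MASK].",
        "The nurse was hired because of [MASK].",
    ]
    for prompt in prompts:
        print(prompt)
        for pred in fill(prompt, top_k=5):
            # each prediction carries the filled-in token and its probability
            print(f"  {pred['token_str']:>12}  {pred['score']:.3f}")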

IEEE VIS 2024 Content: Inclusion Depth for Contour Ensembles

Inclusion Depth for Contour Ensembles

Nicolas F. Chaves-de-Plaza -

Prerak Mody -

Marius Staring -

René van Egmond -

Anna Vilanova -

Klaus Hildebrandt -

Room: To Be Announced

Keywords

Uncertainty visualization, contours, ensemble summarization, depth statistics

Abstract

Ensembles of contours arise in various applications like simulation, computer-aided design, and semantic segmentation. Uncovering ensemble patterns and analyzing individual members is a challenging task that suffers from clutter. Ensemble statistical summarization can alleviate this issue by permitting analysis of an ensemble's distributional components, such as the mean and median, confidence intervals, and outliers. Contour boxplots, powered by Contour Band Depth (CBD), are a popular non-parametric ensemble summarization method that benefits from CBD's generality, robustness, and theoretical properties. In this work, we introduce Inclusion Depth (ID), a new notion of contour depth with three defining characteristics. First, ID is a generalization of functional Half-Region Depth, which offers several theoretical guarantees. Second, ID relies on a simple principle: the inside/outside relationships between contours. This facilitates implementing ID and understanding its results. Third, the computational complexity of ID scales quadratically in the number of members of the ensemble, improving on CBD's cubic complexity. In practice, this speedup enables the use of ID for exploring large contour ensembles or in contexts requiring multiple depth evaluations, like clustering. In a series of experiments on synthetic data and case studies with meteorological and segmentation data, we evaluate ID's performance and demonstrate its capabilities for the visual analysis of contour ensembles.
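
To make the inside/outside principle concrete, here is a deliberately simplified sketch for ensembles given as binary masks, using strict containment only. The paper's Inclusion Depth definition is more general, so read this as an illustration of the O(N^2) pairwise structure, not as the method itself.

    import numpy as np

    def strict_inclusion_depth(masks):
        """masks: (N, H, W) boolean array; returns one depth per ensemble member."""
        n = len(masks)
        depths = np.zeros(n)
        for i in range(n):
            # mask_i is inside mask_j iff mask_i implies mask_j everywhere
            inside = sum(np.all(~masks[i] | masks[j]) for j in range(n) if j != i)
            outside = sum(np.all(~masks[j] | masks[i]) for j in range(n) if j != i)
            depths[i] = min(inside, outside) / (n - 1)
        return depths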

IEEE VIS 2024 Content: Design Concerns for Integrated Scripting and Interactive Visualization in Notebook Environments

Design Concerns for Integrated Scripting and Interactive Visualization in Notebook Environments

Connor Scully-Allison -

Ian Lumsden -

Katy Williams -

Jesse Bartels -

Michela Taufer -

Stephanie Brink -

Abhinav Bhatele -

Olga Pearce -

Katherine E. Isaacs -

Room: To Be Announced

Keywords

Exploratory Data Analysis, Interactive Data Analysis, Computational Notebooks, Hybrid Visualization-Scripting, Visualization Design

Abstract

Interactive visualization can support fluid exploration but is often limited to predetermined tasks. Scripting can support a vast range of queries but may be more cumbersome for free-form exploration. Embedding interactive visualization in scripting environments, such as computational notebooks, provides an opportunity to leverage the strengths of both direct manipulation and scripting. We investigate interactive visualization design methodology, choices, and strategies under this paradigm through a design study of calling context trees used in performance analysis, a field that exemplifies typical exploratory data analysis workflows with big data and hard-to-define problems. We first produce a formal task analysis assigning tasks to graphical or scripting contexts based on their specificity, frequency, and suitability. We then design a notebook-embedded interactive visualization and validate it with intended users. In a follow-up study, we present participants with multiple graphical and scripting interaction modes to elicit feedback about notebook-embedded visualization design, finding consensus in support of the interaction model. We report and reflect on observations regarding the process and design implications for combining visualization and scripting in notebooks.

IEEE VIS 2024 Content: The Impact of Elicitation and Contrasting Narratives on Engagement, Recall and Attitude Change with News Articles Containing Data Visualization

The Impact of Elicitation and Contrasting Narratives on Engagement, Recall and Attitude Change with News Articles Containing Data Visualization

Milad Rogha -

Subham Sah -

Alireza Karduni -

Douglas Markant -

Wenwen Dou -

Room: To Be Announced

Keywords

Data Visualization, Market Research, Visualization, Uncertainty, Data Models, Correlation, Attitude Control, Belief Elicitation, Visual Elicitation, Data Visualization, Contrasting Narratives

Abstract

News articles containing data visualizations play an important role in informing the public on issues ranging from public health to politics. Recent research on the persuasive appeal of data visualizations suggests that prior attitudes can be notoriously difficult to change. Inspired by an NYT article, we designed two experiments to evaluate the impact of elicitation and contrasting narratives on attitude change, recall, and engagement. We hypothesized that eliciting prior beliefs leads to more elaborative thinking that ultimately results in higher attitude change, better recall, and engagement. Our findings revealed that visual elicitation leads to higher engagement in terms of feelings of surprise. While there is an overall attitude change across all experiment conditions, we did not observe a significant effect of belief elicitation on attitude change. With regard to recall error, while participants in the draw-trend elicitation condition exhibited significantly lower recall error than participants in the categorize-trend condition, we found no significant difference in recall error when comparing elicitation conditions to no elicitation. In a follow-up study, we added contrasting narratives with the purpose of making the main visualization (communicating data on the focal issue) appear strikingly different. Compared to the results of Study 1, we found that contrasting narratives improved engagement in terms of surprise and interest but interestingly resulted in higher recall error and no significant change in attitude. We discuss the effects of elicitation and contrasting narratives in the context of topic involvement and the strengths of temporal trends encoded in the data visualization.

IEEE VIS 2024 Content: Beyond Vision Impairments: Redefining the Scope of Accessible Data Representations

Beyond Vision Impairments: Redefining the Scope of Accessible Data Representations

Brianna L. Wimer -

Laura South -

Keke Wu -

Danielle Albers Szafir -

Michelle A. Borkin -

Ronald A. Metoyer -

Room: To Be Announced

Keywords

Accessibility, Data Representations.

Abstract

The increasing ubiquity of data in everyday life has elevated the importance of data literacy and accessible data representations, particularly for individuals with disabilities. While prior research predominantly focuses on the needs of the visually impaired, our survey aims to broaden this scope by investigating accessible data representations across a more inclusive spectrum of disabilities. After conducting a systematic review of 152 accessible data representation papers from ACM and IEEE databases, we found that roughly 78% of existing articles center on vision impairments. In this paper, we conduct a comprehensive review of the remaining 22% of papers focused on underrepresented disability communities. We developed categorical dimensions based on accessibility, visualization, and human-computer interaction to classify the papers. These dimensions include the community of focus, issues addressed, contribution type, study methods, participants, data type, visualization type, and data domain. Our work redefines accessible data representations by illustrating their application for disabilities beyond those related to vision. Building on our literature review, we identify and discuss opportunities for future research in accessible data representations. All supplemental materials are available at https://osf.io/yv4xm/?view_only=7b36a3fbf7a14b3888029966faa3def9.

IEEE VIS 2024 Content: A Comparative Study on Fixed-order Event Sequence Visualizations: Gantt, Extended Gantt, and Stringline Charts

A Comparative Study on Fixed-order Event Sequence Visualizations: Gantt, Extended Gantt, and Stringline Charts

Junxiu Tang -

Fumeng Yang -

Jiang Wu -

Yifang Wang -

Jiayi Zhou -

Xiwen Cai -

Lingyun Yu -

Yingcai Wu -

Room: To Be Announced

Keywords

Gantt chart, stringline chart, Marey's graph, event sequence, empirical study

Abstract

We conduct two in-lab experiments (N=93) to evaluate the effectiveness of Gantt charts, extended Gantt charts, and stringline charts for visualizing fixed-order event sequence data. We first formulate five types of event sequences and define three types of sequence elements: point events, interval events, and the temporal gaps between them. Our two experiments focus on event sequences with a pre-defined, fixed order, and measure task error rates and completion time. The first experiment shows single sequences and assesses the three charts' performance in comparing event durations or gaps. The second experiment shows multiple sequences and evaluates how well the charts reveal temporal patterns. The results suggest that when visualizing single fixed-order event sequences, 1) Gantt and extended Gantt charts lead to comparable error rates in the duration-comparing task; 2) Gantt charts exhibit either shorter or equal completion time than extended Gantt charts; 3) both Gantt and extended Gantt charts demonstrate shorter completion times than stringline charts; and 4) stringline charts nonetheless outperform the other two charts, with fewer errors in the comparison task, when the number of event types is high. Additionally, when visualizing multiple point-based fixed-order event sequences, stringline charts require less time than Gantt charts for people to find temporal patterns. Based on these findings, we discuss design opportunities for visualizing fixed-order event sequences and discuss future avenues for optimizing these charts.
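
For readers unfamiliar with the chart family: a Gantt-style rendering of interval-event sequences takes only a few lines of matplotlib, as in this sketch (ours; the study's stimuli were more controlled). A stringline chart would instead connect event positions over time with one polyline per sequence.

    import matplotlib.pyplot as plt

    # (start, duration) pairs per event type
    events = {"A": [(0, 3), (7, 2)], "B": [(3, 4)], "C": [(5, 1), (9, 2)]}

    fig, ax = plt.subplots(figsize=(6, 2))
    for row, intervals in enumerate(events.values()):
        ax.broken_barh(intervals, (row - 0.3, 0.6))  # one bar per interval event
    ax.set_yticks(range(len(events)))
    ax.set_yticklabels(list(events))
    ax.set_xlabel("time")
    plt.show()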

IEEE VIS 2024 Content: Uncertainty-Aware Seasonal-Trend Decomposition Based on Loess

Uncertainty-Aware Seasonal-Trend Decomposition Based on Loess

Tim Krake -

Daniel Klötzl -

David Hägele -

Daniel Weiskopf -

Room: To Be Announced

Keywords

I.6.9.g Visualization techniques and methodologies < I.6.9 Visualization < I.6 Simulation, Modeling, and Visualization < I Computing Methodologies; G.3 Probability and Statistics < G Mathematics of Computing; G.3.n Statistical computing < G.3 Probability and Statistics < G Mathematics of Computing; G.3.p Stochastic processes < G.3 Probability and Statistics < G Mathematics of Computing

Abstract

Seasonal-trend decomposition based on loess (STL) is a powerful tool to explore time series data visually. In this paper, we present an extension of STL to uncertain data, named uncertainty-aware STL (UASTL). Our method propagates multivariate Gaussian distributions mathematically exactly through the entire analysis and visualization pipeline. Thereby, stochastic quantities shared between the components of the decomposition are preserved. Moreover, we present application scenarios with uncertainty modeling based on Gaussian processes, e.g., data with uncertain areas or missing values. Besides these mathematical results and modeling aspects, we introduce visualization techniques that address the challenges of uncertainty visualization and the problem of visualizing highly correlated components of a decomposition. The global uncertainty propagation enables time series visualization with STL-consistent samples, the exploration of correlation between and within the decomposition's components, and the analysis of the impact of varying uncertainty. Finally, we show the usefulness of UASTL and the importance of uncertainty visualization with several examples, including a comparison with conventional STL.
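
The exactness rests on a standard identity: a Gaussian pushed through a linear map stays Gaussian, and loess smoothing (and hence each STL component, for fixed parameters) is linear in the input series. A generic sketch of that identity follows; it is our illustration, not the paper's implementation.

    import numpy as np

    def propagate_gaussian(A, mu, Sigma):
        """x ~ N(mu, Sigma)  =>  A @ x ~ N(A @ mu, A @ Sigma @ A.T), exactly."""
        return A @ mu, A @ Sigma @ A.T

    # Example with a 3-point moving average as the linear smoother.
    n = 100
    A = sum(np.eye(n, k=k) for k in (-1, 0, 1)) / 3.0
    mu = np.sin(np.linspace(0, 4 * np.pi, n))
    lags = np.subtract.outer(np.arange(n), np.arange(n))
    Sigma = 0.1 * np.exp(-0.5 * lags**2 / 5.0)  # squared-exponential input covariance
    mu_smooth, Sigma_smooth = propagate_gaussian(A, mu, Sigma)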

IEEE VIS 2024 Content: Accelerating hyperbolic t-SNE

Accelerating hyperbolic t-SNE

Martin Skrodzki -

Hunter van Geffen -

Nicolas F. Chaves-de-Plaza -

Thomas Höllt -

Elmar Eisemann -

Klaus Hildebrandt -

Room: To Be Announced

Keywords

Human-Computer Interaction (cs.HC); Artificial Intelligence (cs.AI); Machine Learning (cs.LG); Quantitative Methods (q-bio.QM); Machine Learning (stat.ML); Dimensionality reduction, t-SNE, hyperbolic embedding, acceleration structure

Abstract

The need to understand the structure of hierarchical or high-dimensional data is present in a variety of fields. Hyperbolic spaces have proven to be an important tool for embedding computations and analysis tasks as their non-linear nature lends itself well to tree or graph data. Subsequently, they have also been used in the visualization of high-dimensional data, where they exhibit increased embedding performance. However, none of the existing dimensionality reduction methods for embedding into hyperbolic spaces scale well with the size of the input data. That is because the embeddings are computed via iterative optimization schemes and the computation cost of every iteration is quadratic in the size of the input. Furthermore, due to the non-linear nature of hyperbolic spaces, Euclidean acceleration structures cannot directly be translated to the hyperbolic setting. This paper introduces the first acceleration structure for hyperbolic embeddings, building upon a polar quadtree. We compare our approach with existing methods and demonstrate that it computes embeddings of similar quality in significantly less time. Implementation and scripts for the experiments can be found at this https URL.
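
For context, geodesic distances in the Poincaré-disk model, where these embeddings live, have a simple closed form. The paper's contribution is the polar-quadtree acceleration that avoids evaluating it between all pairs, which this sketch (ours) does not reproduce.

    import numpy as np

    def poincare_distance(u, v):
        """Geodesic distance between two points strictly inside the unit disk."""
        sq = np.sum((u - v) ** 2)
        denom = (1.0 - np.sum(u**2)) * (1.0 - np.sum(v**2))
        return np.arccosh(1.0 + 2.0 * sq / denom)

    print(poincare_distance(np.zeros(2), np.array([0.5, 0.0])))  # ln(3) ≈ 1.0986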

IEEE VIS 2024 Content: Improving Efficiency of Iso-Surface Extraction on Implicit Neural Representations Using Uncertainty Propagation

Improving Efficiency of Iso-Surface Extraction on Implicit Neural Representations Using Uncertainty Propagation

Haoyu Li -

Han-Wei Shen -

Room: To Be Announced

Keywords

Iso-surface extraction, implicit neural representation, uncertainty propagation, affine arithmetic.

Abstract

Implicit neural representations (INRs) are widely used for scientific data reduction and visualization by modeling the function that maps a spatial location to a data value. Without any prior knowledge about the spatial distribution of values, we are forced to sample densely from INRs to perform visualization tasks like iso-surface extraction, which can be very computationally expensive. Recently, range analysis has shown promising results in improving the efficiency of geometric queries, such as ray casting and hierarchical mesh extraction, on INRs for 3D geometries by using arithmetic rules to bound the output range of the network within a spatial region. However, the analysis bounds are often too conservative for complex scientific data. In this paper, we present an improved technique for range analysis by revisiting the arithmetic rules and analyzing the probability distribution of the network output within a spatial region. We model this distribution efficiently as a Gaussian distribution by applying the central limit theorem. Excluding low-probability values, we are able to tighten the output bounds, resulting in a more accurate estimation of the value range, and hence more accurate identification of iso-surface cells and more efficient iso-surface extraction on INRs. Our approach demonstrates superior performance in terms of iso-surface extraction time on four datasets compared to the original range analysis method and can also be generalized to other geometric query tasks.
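
The gist of the tightening in a few lines: for a weighted sum of many inputs, interval arithmetic takes every term at its worst case, while a CLT-style Gaussian model of the same sum gives mean ± k·sigma bounds that are usually far narrower. The uniform-input assumption and the numbers below are ours for illustration; the paper applies the idea to actual network layers via range analysis.

    import numpy as np

    rng = np.random.default_rng(1)
    w = rng.normal(size=256)   # fixed weights feeding one neuron
    lo, hi = -1.0, 1.0         # each input coordinate ranges over [lo, hi]

    # Interval arithmetic: every term simultaneously at its own worst case.
    ia_lo = np.sum(np.minimum(w * lo, w * hi))
    ia_hi = np.sum(np.maximum(w * lo, w * hi))

    # CLT model: independent uniform inputs  =>  the sum is approximately Gaussian.
    mean = np.sum(w) * (lo + hi) / 2
    std = np.sqrt(np.sum(w**2) * (hi - lo) ** 2 / 12)  # Var(U[lo,hi]) = (hi-lo)^2/12

    print(f"interval bound : [{ia_lo:.1f}, {ia_hi:.1f}]")
    print(f"CLT 3-sigma    : [{mean - 3*std:.1f}, {mean + 3*std:.1f}]  (much tighter)")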

IEEE VIS 2024 Content: LEVA: Using Large Language Models to Enhance Visual Analytics

LEVA: Using Large Language Models to Enhance Visual Analytics

Yuheng Zhao -

Yixing Zhang -

Yu Zhang -

Xinyi Zhao -

Junjie Wang -

Zekai Shao -

Cagatay Turkay -

Siming Chen -

Room: To Be Announced

Keywords

Insight recommendation, mixed-initiative, interface agent, large language models, visual analytics

Abstract

Visual analytics supports data analysis tasks within complex domain problems. However, due to the richness of data types, visual designs, and interaction designs, users need to recall and process a significant amount of information when they visually analyze data. These challenges emphasize the need for more intelligent visual analytics methods. Large language models have demonstrated the ability to interpret various forms of textual data, offering the potential to facilitate intelligent support for visual analytics. We propose LEVA, a framework that uses large language models to enhance users' visual analytics (VA) workflows at multiple stages: onboarding, exploration, and summarization. To support onboarding, we use large language models to interpret visualization designs and view relationships based on system specifications. For exploration, we use large language models to recommend insights based on the analysis of system status and data to facilitate mixed-initiative exploration. For summarization, we present a selective reporting strategy to retrace analysis history through a stream visualization and generate insight reports with the help of large language models. We demonstrate how LEVA can be integrated into existing visual analytics systems. Two usage scenarios and a user study suggest that LEVA effectively aids users in conducting visual analytics.

IEEE VIS 2024 Content: ChartGPT: Leveraging LLMs to Generate Charts from Abstract Natural Language

ChartGPT: Leveraging LLMs to Generate Charts from Abstract Natural Language

Yuan Tian -

Weiwei Cui -

Dazhen Deng -

Xinjing Yi -

Yurun Yang -

Haidong Zhang -

Yingcai Wu -

Room: To Be Announced

Keywords

Natural language interfaces, large language models, data visualization

Abstract

The use of natural language interfaces (NLIs) to create charts is becoming increasingly popular due to the intuitiveness of natural language interactions. One key challenge in this approach is to accurately capture user intents and transform them into proper chart specifications. This obstructs the wide use of NLIs in chart generation, as users' natural language inputs are generally abstract (i.e., ambiguous or under-specified), without a clear specification of visual encodings. Recently, pre-trained large language models (LLMs) have exhibited superior performance in understanding and generating natural language, demonstrating great potential for downstream tasks. Inspired by this major trend, we propose ChartGPT, which generates charts from abstract natural language inputs. However, LLMs struggle to address complex logic problems. To enable the model to accurately specify the complex parameters and perform operations in chart generation, we decompose the generation process into a step-by-step reasoning pipeline, so that the model only needs to reason about a single, specific sub-task during each run. Moreover, LLMs are pre-trained on general datasets, which might be biased for the task of chart generation. To provide adequate visualization knowledge, we create a dataset consisting of abstract utterances and charts and improve model performance through fine-tuning. We further design an interactive interface for ChartGPT that allows users to check and modify the intermediate outputs of each step. The effectiveness of the proposed system is evaluated through quantitative evaluations and a user study.
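
The step-by-step decomposition can be pictured as a chain of focused calls, one per sub-task, as sketched below. Here llm and the four sub-tasks are hypothetical placeholders, not the paper's actual pipeline or prompts.

    def llm(prompt: str) -> str:
        """Hypothetical stand-in for any chat-completion client."""
        raise NotImplementedError("plug in an LLM client here")

    def chart_from_utterance(utterance: str, columns: list[str]) -> dict:
        ctx = f"Table columns: {columns}. User request: {utterance!r}."
        return {
            # each call reasons about one narrow decision instead of the whole spec
            "fields":    llm(f"{ctx} Which columns are needed? Answer with a list."),
            "transform": llm(f"{ctx} Which filter or aggregation applies, if any?"),
            "mark":      llm(f"{ctx} Pick one chart type: bar, line, point, or area."),
            "encoding":  llm(f"{ctx} Map the chosen columns to x, y, and color."),
        }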

IEEE VIS 2024 Content: VisTellAR: Embedding Data Visualization to Short-form Videos Using Mobile Augmented Reality

VisTellAR: Embedding Data Visualization to Short-form Videos Using Mobile Augmented Reality

Wai Tong -

Kento Shigyo -

Lin-Ping Yuan -

Mingming Fan -

Ting-Chuen Pong -

Huamin Qu -

Meng Xia -

Room: To Be Announced

Keywords

Personal data, augmented reality, data visualization, storytelling, short-form video

Abstract

With the rise of short-form video platforms and the increasing availability of data, we see the potential for people to share short-form videos embedded with data in situ (e.g., daily steps when running) to increase the credibility and expressiveness of their stories. However, creating and sharing such videos in situ is challenging since it involves multiple steps and skills (e.g., data visualization creation and video editing), especially for amateurs. By conducting a formative study (N=10) using three design probes, we collected the motivations and design requirements. We then built VisTellAR, a mobile AR authoring tool, to help amateur video creators embed data visualizations in short-form videos in situ. A two-day user study shows that participants (N=12) successfully created various videos with data visualizations in situ and confirmed that the tool is easy to use and learn. AR pre-stage authoring helped participants set up data visualizations in the real scene and encouraged richer designs, including camera movement and interaction with gestures and physical objects for storytelling.

IEEE VIS 2024 Content: Examining Limits of Small Multiples: Frame Quantity Impacts Judgments with Line Graphs

Examining Limits of Small Multiples: Frame Quantity Impacts Judgments with Line Graphs

Helia Hosseinpour -

Laura E. Matzen -

Kristin M. Divis -

Spencer C. Castro -

Lace Padilla -

Room: To Be Announced

Keywords

Cognition, small multiples, time-series data

Abstract

Small multiples are a popular visualization method, displaying different views of a dataset using multiple frames, often with the same scale and axes. However, there is a need to address their potential constraints, especially in the context of human cognitive capacity limits. These limits dictate the maximum information our mind can process at once. We explore the issue of capacity limitation by testing competing theories that describe how the number of frames shown in a display, the scale of the frames, and time constraints impact user performance with small multiples of line charts in an energy grid scenario. In two online studies (Experiment 1, n = 141; Experiment 2, n = 360) and a follow-up eye-tracking analysis (n = 5), we found a linear decline in accuracy with increasing frames across seven tasks, which was not fully explained by differences in frame size, suggesting visual search challenges. Moreover, the studies demonstrate that highlighting specific frames can mitigate some visual search difficulties but, surprisingly, not eliminate them. This research offers insights into optimizing the utility of small multiples by aligning them with human limitations.

IEEE VIS 2024 Content: Agnostic Visual Recommendation Systems: Open Challenges and Future Directions

Agnostic Visual Recommendation Systems: Open Challenges and Future Directions

Luca Podo -

Bardh Prenkaj -

Paola Velardi -

Room: To Be Announced

Abstract

Visualization Recommendation Systems (VRSs) are a novel and challenging field of study aiming to help generate insightful visualizations from data and support non-expert users in information discovery. Among the many contributions proposed in this area, some systems embrace the ambitious objective of imitating human analysts to identify relevant relationships in data and make appropriate design choices to represent these relationships with insightful charts. We denote these systems as "agnostic" VRSs since they do not rely on human-provided constraints and rules but try to learn the task autonomously. Despite the high application potential of agnostic VRSs, their progress is hindered by several obstacles, including the absence of standardized datasets for training recommendation algorithms, the difficulty of learning design rules, and the challenge of defining quantitative criteria for evaluating the perceptual effectiveness of generated plots. This paper summarizes the literature on agnostic VRSs and outlines promising future research directions.

IEEE VIS 2024 Content: Interactive Hierarchical Timeline for Collaborative Text Negotiation in Historical Records

Interactive Hierarchical Timeline for Collaborative Text Negotiation in Historical Records

Gabriel D. Cantareira -

Yiwen Xing -

Nicholas Cole -

Rita Borgo -

Alfie Abdul-Rahman -

Room: To Be Announced

Keywords

Data visualization, Collaboration, History, Humanities, Writing, Navigation, Metadata

Abstract

Visualizing event timelines for collaborative text writing is an important application for navigating and understanding such data, as time passes and the size and complexity of both text and timeline increase. Such timelines are often employed by applications like code repositories and collaborative text editors. In this paper, we present a visualization tool to explore historical records of the drafting of legislative texts, which were discussed and voted on by an assembly of representatives. Our visualization focuses on event timelines from text documents that involve multiple people and different topics, allowing for observation of different proposed versions of the text and tracking the data provenance of given text sections, while highlighting the connections between all elements involved. We also describe the process of designing the tool alongside domain experts, with three rounds of evaluation conducted to verify the effectiveness of our design.

IEEE VIS 2024 Content: De-cluttering Scatterplots with Integral Images

De-cluttering Scatterplots with Integral Images

Hennes Rave -

Vladimir Molchanov -

Lars Linsen -

Room: To Be Announced

Abstract

Scatterplots provide a visual representation of bivariate data (or 2D embeddings of multivariate data) that allows for effective analyses of data dependencies, clusters, trends, and outliers. Unfortunately, classical scatterplots suffer from scalability issues, since growing data sizes eventually lead to overplotting and visual clutter on a screen with a fixed resolution, which hinders the data analysis process. We propose an algorithm that compensates for irregular sample distributions by a smooth transformation of the scatterplot's visual domain. Our algorithm evaluates the scatterplot's density distribution to compute a regularization mapping based on integral images of the rasterized density function. The mapping preserves the samples' neighborhood relations. A few regularization iterations suffice to achieve a nearly uniform sample distribution that efficiently uses the available screen space. We further propose approaches to visually convey the transformation that was applied to the scatterplot and compare them in a user study. We present a novel parallel algorithm for fast GPU-based integral-image computation, which allows for integrating our de-cluttering approach into interactive visual data analysis systems.
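
As an aside for readers unfamiliar with the underlying data structure: an integral image (summed-area table) turns arbitrary rectangular density sums into four lookups. The sketch below is a minimal NumPy version under our own naming, not the paper's GPU implementation:

```python
import numpy as np

def integral_image(density):
    """Summed-area table with zero padding: I[i, j] = sum of density[:i, :j]."""
    I = np.zeros((density.shape[0] + 1, density.shape[1] + 1))
    I[1:, 1:] = density.cumsum(axis=0).cumsum(axis=1)
    return I

def box_sum(I, r0, c0, r1, c1):
    """Total density inside rows [r0, r1) and columns [c0, c1), in O(1)."""
    return I[r1, c1] - I[r0, c1] - I[r1, c0] + I[r0, c0]

# Rasterize a 2D point set into a density grid, then query any rectangle.
rng = np.random.default_rng(0)
points = rng.normal(size=(10_000, 2))
density, _, _ = np.histogram2d(points[:, 0], points[:, 1], bins=256)
I = integral_image(density)
print(box_sum(I, 0, 0, 128, 128))  # point mass in one quadrant
```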

IEEE VIS 2024 Content: Bimodal Visualization of Industrial X-ray and Neutron Computed Tomography Data

Bimodal Visualization of Industrial X-ray and Neutron Computed Tomography Data

Xuan Huang -

Haichao Miao -

Hyojin Kim -

Andrew Townsend -

Kyle Champley -

Joseph Tringe -

Valerio Pascucci -

Peer-Timo Bremer -

Room: To Be Announced

Abstract

Advanced manufacturing creates increasingly complex objects with material compositions that are often difficult to characterize by a single modality. Our domain scientists are going beyond traditional methods by employing both X-ray and neutron computed tomography to obtain complementary representations expected to better resolve material boundaries. However, the use of two modalities creates its own challenges for visualization, requiring either complex adjustments of multimodal transfer functions or the need for multiple views. Together with experts in nondestructive evaluation, we designed a novel interactive multimodal visualization approach to create a combined view of the co-registered X-ray and neutron acquisitions of industrial objects. Using an automatic topological segmentation of the bivariate histogram of X-ray and neutron values as a starting point, the system provides a simple yet effective interface to easily create, explore, and adjust a multimodal visualization. We propose a widget with simple brushing interactions that enables the user to quickly correct the segmented histogram results. Our semi-automated system enables domain experts to intuitively explore large multimodal datasets without the need for either advanced segmentation algorithms or knowledge of visualization techniques. We demonstrate our approach using synthetic examples, industrial phantom objects created to stress multimodal scanning techniques, and real-world objects, and we discuss expert feedback.
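
The bivariate histogram that serves as the paper's starting point is straightforward to compute for two co-registered volumes. Below is a minimal sketch with synthetic stand-in data; the thresholding at the end is only a placeholder for the paper's topological segmentation, which is not reproduced here:

```python
import numpy as np

# Two co-registered volumes; random stand-ins for CT reconstructions.
rng = np.random.default_rng(1)
xray = rng.random((64, 64, 64))
neutron = rng.random((64, 64, 64))

# Bivariate histogram over (x-ray, neutron) pairs: every voxel contributes
# one count to the bin matching its two intensity values.
hist, x_edges, n_edges = np.histogram2d(xray.ravel(), neutron.ravel(), bins=128)

# Placeholder for the topological segmentation: simply flag bins that are
# unusually densely populated as candidate material regions.
material_bins = hist > hist.mean() + 2 * hist.std()
print(material_bins.sum(), "candidate material bins")
```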

IEEE VIS 2024 Content: Visual Analysis of Time-Stamped Event Sequences

Visual Analysis of Time-Stamped Event Sequences

Jürgen Bernard -

Clara-Maria Barth -

Eduard Cuba -

Andrea Meier -

Yasara Peiris -

Ben Shneiderman -

Room: To Be Announced

Keywords

Time-Stamped Event Sequences, Time-Oriented Data, Visual Analytics, Data-First Design Study, Iterative Design, Visual Interfaces, User Evaluation

Abstract

Time-stamped event sequences (TSEQs) are time-oriented data without value information, shifting the focus of users to the exploration of temporal event occurrences. TSEQs exist in application domains, such as sleeping behavior, earthquake aftershocks, and stock market crashes. Domain experts face four challenges, for which they could use interactive and visual data analysis methods. First, TSEQs can be large with respect to both the number of sequences and events, often leading to millions of events. Second, domain experts need validated metrics and features to identify interesting patterns. Third, after identifying interesting patterns, domain experts contextualize the patterns to foster sensemaking. Finally, domain experts seek to reduce data complexity by data simplification and machine learning support. We present IVESA, a visual analytics approach for TSEQs. It supports the analysis of TSEQs at the granularities of sequences and events, complemented by metrics and feature analysis tools. IVESA has multiple linked views that support overview, sort+filter, comparison, details-on-demand, and metadata relation-seeking tasks, as well as data simplification through feature analysis, interactive clustering, filtering, and motif detection and simplification. We evaluated IVESA with three case studies and a user study with six domain experts working with six different datasets and applications. Results demonstrate the usability and generalizability of IVESA across applications and cases that had up to 1,000,000 events.
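
To make the data category concrete: a TSEQ is just an ordered list of timestamps, and per-sequence features can be derived from the gaps between events. The toy sketch below uses invented feature names; IVESA's validated metrics are more extensive:

```python
from dataclasses import dataclass
import statistics

@dataclass
class TSEQ:
    """A time-stamped event sequence: ordered timestamps, no values."""
    name: str
    timestamps: list[float]

    def features(self) -> dict:
        gaps = [b - a for a, b in zip(self.timestamps, self.timestamps[1:])]
        return {
            "n_events": len(self.timestamps),
            "duration": self.timestamps[-1] - self.timestamps[0],
            "mean_gap": statistics.mean(gaps),
            "gap_spread": statistics.pstdev(gaps),  # bursty vs. regular
        }

aftershocks = TSEQ("aftershocks", [0.0, 0.4, 0.9, 5.2, 5.3, 11.0])
print(aftershocks.features())
```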

IEEE VIS 2024 Content: Chart2Vec: A Universal Embedding of Context-Aware Visualizations

Chart2Vec: A Universal Embedding of Context-Aware Visualizations

Qing Chen -

Ying Chen -

Ruishi Zou -

Wei Shuai -

Yi Guo -

Jiazhe Wang -

Nan Cao -

Room: To Be Announced

Keywords

Representation Learning, Multi-view Visualization, Visual Storytelling, Visualization Embedding

Abstract

The advances in AI-enabled techniques have accelerated the creation and automation of visualizations in the past decade. However, presenting visualizations in a descriptive and generative format remains a challenge. Moreover, current visualization embedding methods focus on standalone visualizations, neglecting the importance of contextual information for multi-view visualizations. To address this issue, we propose a new representation model, Chart2Vec, to learn a universal embedding of visualizations with context-aware information. Chart2Vec aims to support a wide range of downstream visualization tasks such as recommendation and storytelling. Our model considers both structural and semantic information of visualizations in declarative specifications. To enhance the context-aware capability, Chart2Vec employs multi-task learning on both supervised and unsupervised tasks concerning the co-occurrence of visualizations. We evaluate our method through an ablation study, a user study, and a quantitative comparison. The results verified the consistency of our embedding method with human cognition and showed its advantages over existing methods.

IEEE VIS 2024 Content: Visualization for diagnostic review of copy number variants in complex DNA sequencing data

Visualization for diagnostic review of copy number variants in complex DNA sequencing data

Emilia Ståhlbom -

Jesper Molin -

Claes Lundström -

Anders Ynnerman -

Room: To Be Announced

Keywords

Visualization, genomics, copy number variants, clinical decision support, evaluation

Abstract

Genomics is at the core of precision medicine, and there are high expectations on genomics-enabled improvement of patient outcomes in the years to come. Around the world, initiatives to increase the use of DNA sequencing in clinical routine are being deployed, such as the use of broad panels in the standard care for oncology patients. Such a development comes at the cost of increased demands on throughput in genomic data analysis. In this paper, we use the task of copy number variant (CNV) analysis as a context for exploring visualization concepts for clinical genomics. CNV calls are generated algorithmically, but time-consuming manual intervention is needed to separate relevant findings from irrelevant ones in the resulting large call candidate lists. We present a visualization environment, named Copycat, to support this review task in a clinical scenario. Key components are a scatter-glyph plot replacing the traditional list visualization, and a glyph representation designed for at-a-glance relevance assessments. Moreover, we present results from a formative evaluation of the prototype by domain specialists, from which we elicit insights to guide both prototype improvements and visualization for clinical genomics in general.

IEEE VIS 2024 Content: TTK is Getting MPI-Ready

TTK is Getting MPI-Ready

E. Le Guillou -

M. Will -

P. Guillou -

J. Lukasczyk -

P. Fortin -

C. Garth -

J. Tierny -

Room: To Be Announced

Keywords

Topological data analysis, high-performance computing, distributed-memory algorithms.

Abstract

This system paper documents the technical foundations for the extension of the Topology ToolKit (TTK) to distributed-memory parallelism with the Message Passing Interface (MPI). While several recent papers introduced topology-based approaches for distributed-memory environments, these reported experiments obtained with tailored, mono-algorithm implementations. In contrast, we describe in this paper a versatile approach (supporting both triangulated domains and regular grids) for the support of topological analysis pipelines, i.e., a sequence of topological algorithms interacting together, possibly on distinct numbers of processes. While developing this extension, we faced several algorithmic and software engineering challenges, which we document in this paper. We describe an MPI extension of TTK’s data structure for triangulation representation and traversal, a central component to the global performance and generality of TTK’s topological implementations. We also introduce an intermediate interface between TTK and MPI, both at the global pipeline level and at the fine-grain algorithmic level. We provide a taxonomy for the distributed-memory topological algorithms supported by TTK, depending on their communication needs, and provide examples of hybrid MPI+thread parallelizations. Detailed performance analyses show that parallel efficiencies range from 20% to 80% (depending on the algorithms), and that the MPI-specific preconditioning introduced by our framework induces a negligible computation time overhead. We illustrate the new distributed-memory capabilities of TTK with an example of an advanced analysis pipeline, combining multiple algorithms, run on the largest publicly available dataset we have found (120 billion vertices) on a standard cluster with 64 nodes (for a total of 1536 cores). Finally, we provide a roadmap for the completion of TTK’s MPI extension, along with generic recommendations for each algorithm communication category.
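
For readers unfamiliar with the reported metric: parallel efficiency is the speedup over serial execution divided by the number of processes. A one-line sketch with illustrative timings (not measurements from the paper):

```python
def parallel_efficiency(t_serial, t_parallel, n_procs):
    """Speedup divided by process count; 1.0 would be ideal scaling."""
    return t_serial / (n_procs * t_parallel)

# A pipeline taking 600 s serially and 25 s on 64 processes reaches 37.5%,
# which sits inside the 20-80% range reported across TTK's algorithms.
print(f"{parallel_efficiency(600.0, 25.0, 64):.1%}")
```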

IEEE VIS 2024 Content: Active Gaze Labeling: Visualization for Trust Building

Active Gaze Labeling: Visualization for Trust Building

Maurice Koch -

Nan Cao -

Daniel Weiskopf -

Kuno Kurzhals -

Room: To Be Announced

Keywords

Visual analytics, eye tracking, uncertainty, active learning, trust building

Abstract

Areas of interest (AOIs) are well-established means of providing semantic information for visualizing, analyzing, and classifying gaze data. However, the usual manual annotation of AOIs is time-consuming and further impaired by ambiguities in label assignments. To address these issues, we present an interactive labeling approach that combines visualization, machine learning, and user-centered explainable annotation. Our system provides uncertainty-aware visualization to build trust in classification with an increasing number of annotated examples. It combines specifically designed EyeFlower glyphs, dimensionality reduction, and selection and exploration techniques in an integrated workflow. The approach is versatile and hardware-agnostic, supporting video stimuli from stationary and unconstrained mobile eye tracking alike. We conducted an expert review to assess labeling strategies and trust building.
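
The active-learning ingredient can be illustrated with generic uncertainty sampling, which queries the sample the current classifier is least certain about. This is a schematic stand-in, not the paper's system; the features and labels below are synthetic:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 8))              # stand-in gaze features
y = (X[:, 0] + X[:, 1] > 0).astype(int)    # stand-in AOI labels

# Seed with a few labeled examples from each class.
labeled = list(np.where(y == 0)[0][:5]) + list(np.where(y == 1)[0][:5])

for _ in range(20):
    clf = RandomForestClassifier(random_state=0).fit(X[labeled], y[labeled])
    proba = clf.predict_proba(X)
    margin = np.abs(proba[:, 0] - proba[:, 1])  # small margin = uncertain
    margin[labeled] = np.inf                    # never re-query labeled rows
    labeled.append(int(margin.argmin()))        # query the most ambiguous
print(f"accuracy after 30 labels: {clf.score(X, y):.2f}")
```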

IEEE VIS 2024 Content: MARLens: Understanding Multi-agent Reinforcement Learning for Traffic Signal Control via Visual Analytics

MARLens: Understanding Multi-agent Reinforcement Learning for Traffic Signal Control via Visual Analytics

Yutian Zhang -

Guohong Zheng -

Zhiyuan Liu -

Quan Li -

Haipeng Zeng -

Room: To Be Announced

Keywords

Traffic signal control, multi-agent, reinforcement learning, visual analytics

Abstract

The issue of traffic congestion poses a significant obstacle to the development of global cities. One promising solution to tackle this problem is intelligent traffic signal control (TSC). Recently, TSC strategies leveraging reinforcement learning (RL) have garnered attention among researchers. However, the evaluation of these models has primarily relied on fixed metrics like reward and queue length. This limited evaluation approach provides only a narrow view of the model’s decision-making process, impeding its practical implementation. Moreover, effective TSC necessitates coordinated actions across multiple intersections. Existing visual analysis solutions fall short when applied in multi-agent settings. In this study, we delve into the challenge of interpretability in multi-agent reinforcement learning (MARL), particularly within the context of TSC. We propose MARLens, a visual analytics system tailored to understand MARL-based TSC. Our system serves as a versatile platform for both RL and TSC researchers. It empowers them to explore the model’s features from various perspectives, revealing its decision-making processes and shedding light on interactions among different agents. To facilitate quick identification of critical states, we have devised multiple visualization views, complemented by a traffic simulation module that allows users to replay specific training scenarios. To validate the utility of our proposed system, we present three comprehensive case studies, incorporate insights from domain experts through interviews, and conduct a user study. These collective efforts underscore the feasibility and effectiveness of MARLens in enhancing our understanding of MARL-based TSC systems and pave the way for more informed and efficient traffic management strategies.

IEEE VIS 2024 Content: FMLens: Towards Better Scaffolding the Process of Fund Manager Selection in Fund Investments

FMLens: Towards Better Scaffolding the Process of Fund Manager Selection in Fund Investments

Longfei Chen -

Chen Cheng -

He Wang -

Xiyuan Wang -

Yun Tian -

Xuanwu Yue -

Wong Kam-Kwai -

Haipeng Zhang -

Suting Hong -

Quan Li -

Room: To Be Announced

Keywords

Financial Data, Fund Manager Selection, Visual Analytics

Abstract

The fund investment industry heavily relies on the expertise of fund managers, who bear the responsibility of managing portfolios on behalf of clients. With their investment knowledge and professional skills, fund managers gain a competitive advantage over the average investor in the market. Consequently, investors prefer entrusting their investments to fund managers rather than directly investing in funds. For these investors, the primary concern is selecting a suitable fund manager. While previous studies have employed quantitative or qualitative methods to analyze various aspects of fund managers, such as performance metrics, personal characteristics, and performance persistence, they often face challenges when dealing with a large candidate space. Moreover, distinguishing whether a fund manager's performance stems from skill or luck poses a challenge, making it difficult to align with investors' preferences in the selection process. To address these challenges, this study characterizes the requirements of investors in selecting suitable fund managers and proposes an interactive visual analytics system called FMLens. This system streamlines the fund manager selection process, allowing investors to efficiently assess and deconstruct fund managers' investment styles and abilities across multiple dimensions. Additionally, the system empowers investors to scrutinize and compare fund managers' performances. The effectiveness of the approach is demonstrated through two case studies and a qualitative user study. Feedback from domain experts indicates that the system excels in analyzing fund managers from diverse perspectives, enhancing the efficiency of fund manager evaluation and selection.

IEEE VIS 2024 Content: Reviving Static Charts into Live Charts

Reviving Static Charts into Live Charts

Velitchko Filipov -

Alessio Arleo -

Markus Bögl -

Silvia Miksch -

Room: To Be Announced

Keywords

Charts, storytelling, machine learning, automatic visualization

Abstract

Data charts are prevalent across various fields due to their efficacy in conveying complex data relationships. However, static charts may sometimes struggle to engage readers and efficiently present intricate information, potentially resulting in limited understanding. We introduce “Live Charts,” a new format of presentation that decomposes complex information within a chart and explains the information pieces sequentially through rich animations and accompanying audio narration. We propose an automated approach to revive static charts into Live Charts. Our method integrates GNN-based techniques to analyze the chart components and extract data from charts. Then we adopt large natural language models to generate appropriate animated visuals along with a voice-over to produce Live Charts from static ones. We conducted a thorough evaluation of our approach, which involved the model performance, use cases, a crowd-sourced user study, and expert interviews. The results demonstrate that Live Charts offer a multi-sensory experience where readers can follow the information and understand the data insights better. We analyze the benefits and drawbacks of Live Charts over static charts as a new information consumption experience.

IEEE VIS 2024 Content: A Survey on Non-photorealistic Rendering Approaches for Point Cloud Visualization

A Survey on Non-photorealistic Rendering Approaches for Point Cloud Visualization

Ole Wegen -

Willy Scheibel -

Matthias Trapp -

Rico Richter -

Jürgen Döllner -

Room: To Be Announced

Keywords

Point clouds, survey, non-photorealistic rendering

Abstract

Point clouds are widely used as a versatile representation of 3D entities and scenes for all scale domains and in a variety of application areas, serving as a fundamental data category to directly convey spatial features. However, due to point sparsity, lack of structure, irregular distribution, and acquisition-related inaccuracies, results of point cloud visualization are often subject to visual complexity and ambiguity. In this regard, non-photorealistic rendering can improve visual communication by reducing the cognitive effort required to understand an image or scene and by directing attention to important features. In the last 20 years, this has been demonstrated by various non-photorealistic rendering approaches that were proposed to target point clouds specifically. However, they do not use a common language or structure for assessment, which complicates comparison and selection. Further, recent developments regarding point cloud characteristics and processing, such as massive data size or web-based rendering, are rarely considered. To address these issues, we present a survey on non-photorealistic rendering approaches for point cloud visualization, providing an overview of the current state of research. We derive a structure for the assessment of approaches, proposing seven primary dimensions for the categorization regarding intended goals, data requirements, used techniques, and mode of operation. We then systematically assess corresponding approaches and utilize this classification to identify trends and research gaps, motivating future research in the development of effective non-photorealistic point cloud rendering methods.

IEEE VIS 2024 Content: Tracing NFT Impact Dynamics in Transaction-flow Substitutive Systems with Visual Analytics

Tracing NFT Impact Dynamics in Transaction-flow Substitutive Systems with Visual Analytics

Yifan Cao -

Qing Shi -

Lucas Shen -

Kani Chen -

Yang Wang -

Wei Zeng -

Huamin Qu -

Room: To Be Announced

Keywords

Stakeholders, Non-Fungible Tokens (NFTs), Social Networking (Online), Visual Analytics, Network Analyzers, Measurement, Layout, Impact Dynamics Analysis, NFT Transaction Data, Substitutive Systems

Abstract

Impact dynamics are crucial for estimating the growth patterns of NFT projects by tracking the diffusion and decay of their relative appeal among stakeholders. Machine learning methods for impact dynamics analysis offer limited interpretability and transparency, whilst stakeholders require interactive tools for informed decision-making. Nevertheless, developing such a tool is challenging due to the substantial, heterogeneous NFT transaction data and the requirements for flexible, customized interactions. To this end, we integrate intuitive visualizations to unveil the impact dynamics of NFT projects. We first conduct a formative study and summarize analysis criteria, including substitution mechanisms, impact attributes, and design requirements from stakeholders. Next, we propose the Minimal Substitution Model to simulate substitutive systems of NFT projects that can be feasibly represented as node-link graphs. In particular, we utilize attribute-aware techniques to embed the project status and stakeholder behaviors in the layout design. Accordingly, we develop a multi-view visual analytics system, namely NFTracer, allowing interactive analysis of impact dynamics in NFT transactions. We demonstrate the informativeness, effectiveness, and usability of NFTracer by performing two case studies with domain experts and one user study with stakeholders. The studies suggest that NFT projects featuring a higher degree of similarity are more likely to substitute each other. The impact of NFT projects within substitutive systems is contingent upon the degree of stakeholders’ influx and projects’ freshness.
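
The substitutive-system idea can be made concrete with a toy transaction-flow graph, where an edge records stakeholders moving from one project to another. The project names and weights below are invented for illustration:

```python
from collections import defaultdict

# An edge (A, B, w) means stakeholders shifted weight w of activity from
# project A to project B (a substitution event).
flows = [("CryptoCats", "PixelApes", 12),
         ("CryptoCats", "VoxelBirds", 3),
         ("PixelApes", "VoxelBirds", 7)]

out_flow, in_flow = defaultdict(int), defaultdict(int)
for src, dst, weight in flows:
    out_flow[src] += weight
    in_flow[dst] += weight

# Net influx crudely approximates whether a project gains or loses appeal.
for project in sorted(set(out_flow) | set(in_flow)):
    print(project, in_flow[project] - out_flow[project])
```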

IEEE VIS 2024 Content: KMTLabeler: An Interactive Knowledge-Assisted Labeling Tool for Medical Text Classification

KMTLabeler: An Interactive Knowledge-Assisted Labeling Tool for Medical Text Classification

He Wang -

Yang Ouyang -

Yuchen Wu -

Chang Jiang -

Lixia Jin -

Yuanwu Cao -

Quan Li -

Room: To Be Announced

Keywords

Medical Text Labeling, Expert Knowledge, Embedding Network, Visual Cluster Analysis, Active Learning

Abstract

The process of labeling medical text plays a crucial role in medical research. Nonetheless, creating accurately labeled medical texts of high quality is often a time-consuming task that requires specialized domain knowledge. Traditional methods for generating labeled data typically rely on rigid rule-based approaches, which may not adapt well to new tasks. While recent machine learning (ML) methodologies have mitigated the manual labeling efforts, configuring models to align with specific research requirements can be challenging for labelers without technical expertise. Moreover, automated labeling techniques, such as transfer learning, face difficulties in directly incorporating expert input, whereas semi-automated methods, like data programming, allow knowledge integration through rules or knowledge bases but may lack continuous result refinement throughout the entire labeling process. In this study, we present a collaborative human-ML teaming workflow that seamlessly integrates visual cluster analysis and active learning to assist domain experts in labeling medical text with high efficiency. Additionally, we introduce an innovative neural network model called the embedding network, which incorporates expert insights to generate task-specific embeddings for medical texts. We integrate the workflow and embedding network into a visual analytics tool named KMTLabeler, equipped with coordinated multi-level views and interactions. Two illustrative case studies, along with a controlled user study, provide substantial evidence of the effectiveness of KMTLabeler in creating an efficient labeling environment for medical text classification.
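
One simplified step of such knowledge-assisted labeling is propagating an expert's label to the nearest neighbors in embedding space for review. A sketch with random stand-in embeddings; the paper's embedding network learns task-specific vectors instead:

```python
import numpy as np

rng = np.random.default_rng(3)
emb = rng.normal(size=(200, 64))                 # stand-in text embeddings
emb /= np.linalg.norm(emb, axis=1, keepdims=True)  # unit length for cosine

seed = 0                                 # the text the expert just labeled
sims = emb @ emb[seed]                   # cosine similarity to the seed
neighbors = np.argsort(sims)[::-1][1:6]  # top five, excluding the seed
print("suggest the seed's label for texts:", neighbors)
```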

IEEE VIS 2024 Content: PrompTHis: Visualizing the Process and Influence of Prompt Editing during Text-to-Image Creation

PrompTHis: Visualizing the Process and Influence of Prompt Editing during Text-to-Image Creation

Yuhan Guo -

Hanning Shao -

Can Liu -

Kai Xu -

Xiaoru Yuan -

Room: To Be Announced

Keywords

Text visualization, image visualization, text-to-image generation, editing history, provenance, generative art

Abstract

Generative text-to-image models, which allow users to create appealing images through a text prompt, have seen a dramatic increase in popularity in recent years. However, most users have a limited understanding of how such models work and often rely on trial and error strategies to achieve satisfactory results. The prompt history contains a wealth of information that could provide users with insights into what has been explored and how the prompt changes impact the output image, yet little research attention has been paid to the visual analysis of such a process to support users. We propose the Image Variant Graph, a novel visual representation designed to support comparing prompt-image pairs and exploring the editing history. The Image Variant Graph models prompt differences as edges between corresponding images and presents the distances between images through projection. Based on the graph, we developed the PrompTHis system through co-design with artists. By reviewing and analyzing the prompting history, users can better understand the impact of prompt changes and have more effective control over image generation. A quantitative user study and qualitative interviews demonstrate that PrompTHis can help users review the prompt history, make sense of the model, and plan their creative process.
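
The edges of such a graph can be approximated with a word-level diff between successive prompts. A sketch over an invented prompt history, using Python's standard difflib:

```python
import difflib

# Successive prompts become nodes; each edge is labeled with the word-level
# edit between them.
history = [
    "a cat in a garden",
    "a cat in a garden, watercolor",
    "a black cat in a garden, watercolor",
]

for a, b in zip(history, history[1:]):
    edit = [tok for tok in difflib.ndiff(a.split(), b.split())
            if tok.startswith(("+", "-"))]  # keep insertions and deletions
    print(f"{a!r} -> {b!r}: {edit}")
```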

IEEE VIS 2024 Content: WonderFlow: Narration-Centric Design of Animated Data Videos

WonderFlow: Narration-Centric Design of Animated Data Videos

Yun Wang -

Leixian Shen -

Zhengxin You -

Xinhuan Shu -

Bongshin Lee -

John Thompson -

Haidong Zhang -

Dongmei Zhang -

Room: To Be Announced

Keywords

Data video, Data visualization, Narration-animation interplay, Storytelling, Authoring tool

Abstract

Creating an animated data video with audio narration is a time-consuming and complex task that requires expertise. It involves designing complex animations, turning written scripts into audio narrations, and synchronizing visual changes with the narrations. This paper presents WonderFlow, an interactive authoring tool that facilitates narration-centric design of animated data videos. WonderFlow allows authors to easily specify semantic links between text and the corresponding chart elements. Then it automatically generates audio narration by leveraging text-to-speech techniques and aligns the narration with an animation. WonderFlow provides a structure-aware animation library designed to ease chart animation creation, enabling authors to apply pre-designed animation effects to common visualization components. Additionally, authors can preview and refine their data videos within the same system, without having to switch between different creation tools. A series of evaluation results confirmed that WonderFlow is easy to use and simplifies the creation of data videos with narration-animation interplay.
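
The narration-animation interplay boils down to linking script phrases to chart elements and triggering effects at the phrases' spoken timestamps. A schematic sketch with invented element ids; evenly spaced word timings stand in for what a text-to-speech engine would report:

```python
script = "Sales rose sharply in March before dipping in April"
links = [
    {"phrase": "rose sharply in March", "element": "bar-march", "effect": "highlight"},
    {"phrase": "dipping in April", "element": "bar-april", "effect": "highlight"},
]

# Fake word timestamps: one word every 0.4 s.
word_times = {w: i * 0.4 for i, w in enumerate(script.split())}
for link in links:
    start = word_times[link["phrase"].split()[0]]  # fire at the first word
    print(f'{link["effect"]} {link["element"]} at t={start:.1f}s')
```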

IEEE VIS 2024 Content: Nanomatrix: Scalable Construction of Crowded Biological Environments

Nanomatrix: Scalable Construction of Crowded Biological Environments

Ruwayda Alharbi -

Ondřej Strnad -

Tobias Klein -

Ivan Viola -

Room: To Be Announced

Keywords

Interactive rendering, view-guided scene construction, biological data, hardware ray tracing

Abstract

We present a novel method for the interactive construction and rendering of extremely large molecular scenes, capable of representing multiple biological cells in atomistic detail. Our method is tailored for scenes that are procedurally constructed based on a given set of building rules. Rendering of large scenes normally requires the entire scene to be available in-core, or alternatively, out-of-core management to load data into the memory hierarchy as part of the rendering loop. Instead of out-of-core memory management, we propose to procedurally generate the scene on demand, on the fly. The key idea is a positional- and view-dependent procedural scene-construction strategy, where only a fraction of the atomistic scene around the camera is available in the GPU memory at any given time. The atomistic detail is populated into a uniform space partitioning using a grid that covers the entire scene. Most of the grid cells are not filled with geometry; only those that are potentially seen by the camera are populated. The atomistic detail is populated in a compute shader, and its representation is connected with acceleration data structures for hardware ray tracing on modern GPUs. Objects that are far away, where atomistic detail is not perceivable from a given viewpoint, are represented by a triangle mesh mapped with a seamless texture, generated from the rendering of geometry from atomistic detail. The algorithm consists of two pipelines, the construction-compute pipeline and the rendering pipeline, which work together to render molecular scenes containing trillions of atoms, at an atomistic resolution far beyond the limit of the GPU memory. We demonstrate our technique on multiple models of SARS-CoV-2 and the red blood cell.
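
The view-dependent population strategy can be reduced to a toy: cover the scene with a uniform grid and fill only cells near the camera with atomistic detail. All sizes below are invented, and visibility is simplified to a distance test rather than a real view frustum:

```python
import numpy as np

GRID = 32               # cells per axis
CELL = 10.0             # world units per cell
DETAIL_RADIUS = 45.0    # populate atomistic detail within this distance

camera = np.array([160.0, 160.0, 160.0])  # camera at the scene center

# Cell centers of the uniform space partitioning covering the scene.
centers = (np.indices((GRID, GRID, GRID)).reshape(3, -1).T + 0.5) * CELL
near = np.linalg.norm(centers - camera, axis=1) < DETAIL_RADIUS

# Far cells would instead fall back to a textured proxy mesh.
print(f"{near.sum()} of {GRID**3} cells populated with atomistic detail")
```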

IEEE VIS 2024 Content: Enhancing Data Literacy On-demand: LLMs as Guides for Novices in Chart Interpretation

Enhancing Data Literacy On-demand: LLMs as Guides for Novices in Chart Interpretation

Kiroong Choe -

Chaerin Lee -

Soohyun Lee -

Jiwon Song -

Aeri Cho -

Nam Wook Kim -

Jinwook Seo -

Room: To Be Announced

Keywords

Visualization literacy, Large language model, Visual communication

Abstract

With the growing complexity and volume of data, visualizations have become more intricate, often requiring advanced techniques to convey insights. These complex charts are prevalent in everyday life, and individuals who lack knowledge in data visualization may find them challenging to understand. This paper investigates using Large Language Models (LLMs) to help users with low data literacy understand complex visualizations. While previous studies focus on text interactions with users, we noticed that visual cues are also critical for interpreting charts. We introduce an LLM application that supports both text and visual interaction for guiding chart interpretation. Our study with 26 participants revealed that the in-situ support effectively assisted users in interpreting charts and enhanced learning by addressing specific chart-related questions and encouraging further exploration. Visual communication allowed participants to convey their interests straightforwardly, eliminating the need for textual descriptions. However, the LLM assistance led users to engage less with the system, resulting in fewer insights from the visualizations. This suggests that users, particularly those with lower data literacy and motivation, may have over-relied on the LLM agent. We discuss opportunities for deploying LLMs to enhance visualization literacy while emphasizing the need for a balanced approach.
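As a purely illustrative sketch of what pairing text and visual interaction might look like (the paper does not publish its system at this level of detail; ask_llm below is a hypothetical stand-in for any multimodal LLM client):

    import json

    def ask_llm(prompt, chart_png_bytes):
        # Hypothetical stand-in: wire up a real multimodal LLM client here.
        raise NotImplementedError

    def explain_selection(chart_png_bytes, selection_bbox, question):
        # The user's visual selection (a region brushed on the chart) travels
        # with the text question, so no textual description of it is needed.
        prompt = json.dumps({
            "question": question,
            "selected_region": selection_bbox,  # e.g. {"x": 120, "y": 80, "w": 60, "h": 40}
            "instruction": "Explain this part of the chart to a novice reader.",
        })
        return ask_llm(prompt, chart_png_bytes)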

IEEE VIS 2024 Content: The State of Reproducibility Stamps for Visualization Research Papers

The State of Reproducibility Stamps for Visualization Research Papers

Tobias Isenberg - Université Paris-Saclay, CNRS, Orsay, France. Inria, Saclay, France

Room: To Be Announced

Abstract

I analyze the evolution of papers certified by the Graphics Replicability Stamp Initiative (GRSI) to be reproducible, with a specific focus on the subset of publications that address visualization-related topics. With this analysis I show that, while the number of papers is increasing overall and within the visualization field, we still have to improve quite a bit to escape the replication crisis. I base my analysis on the data published by the GRSI as well as publication data for the different venues in visualization and lists of journal papers that have been presented at visualization-focused conferences. I also analyze the differences between the involved journals as well as the percentage of reproducible papers in the different presentation venues. Furthermore, I look at the authors of the publications and, in particular, their affiliation countries to see where most reproducible papers come from. Finally, I discuss potential reasons for the low reproducibility numbers and suggest possible ways to overcome these obstacles. This paper is reproducible itself, with source code and data available from github.com/tobiasisenberg/Visualization-Reproducibility as well as a free paper copy and all supplemental materials at osf.io/mvnbj.
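By way of illustration only (the author's actual scripts live in the linked repository; the file and column names below are hypothetical), the per-venue share of stamped papers could be computed along these lines:

    import pandas as pd

    stamped = pd.read_csv("grsi_stamped.csv")    # hypothetical columns: doi, venue, year
    totals = pd.read_csv("venue_totals.csv")     # hypothetical columns: venue, year, n_papers

    counts = stamped.groupby(["venue", "year"]).size().rename("n_stamped").reset_index()
    rates = counts.merge(totals, on=["venue", "year"])
    rates["pct_reproducible"] = 100 * rates["n_stamped"] / rates["n_papers"]
    print(rates.sort_values("pct_reproducible", ascending=False))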

\ No newline at end of file diff --git a/program/paper_w-beliv-1004.html b/program/paper_w-beliv-1004.html new file mode 100644 index 000000000..c5e8e0811 --- /dev/null +++ b/program/paper_w-beliv-1004.html @@ -0,0 +1,127 @@ + IEEE VIS 2024 Content: How Many Evaluations are Enough? A Position Paper on Evaluation Trend in Information Visualization

How Many Evaluations are Enough? A Position Paper on Evaluation Trend in Information Visualization

Feng Lin - University of North Carolina at Chapel Hill, Chapel Hill, United States

Arran Zeyu Wang - University of North Carolina-Chapel Hill, Chapel Hill, United States

Md Dilshadur Rahman - University of Utah, Salt Lake City, United States

Danielle Albers Szafir - University of North Carolina-Chapel Hill, Chapel Hill, United States

Ghulam Jilani Quadri - University of Oklahoma, Norman, United States

Room: To Be Announced

Abstract

In the rapidly evolving field of information visualization, rigorous evaluation is essential for validating new techniques, understanding user interactions, and demonstrating the effectiveness of visualizations. The evaluation of visualization systems is fundamental to ensuring their effectiveness, usability, and impact. Faithful evaluations provide valuable insights into how users interact with and perceive the system, enabling designers to make informed decisions about design choices and improvements. However, an emerging trend of multiple evaluations within a single study raises critical questions about the sustainability, feasibility, and methodological rigor of such an approach. "How many evaluations are enough?" is a situational question and cannot be answered formulaically. Our objective is to summarize current trends and patterns to understand general practices across different contribution and evaluation types. New researchers and students, influenced by this trend, may come to believe that multiple evaluations are necessary for a study. However, the number of evaluations in a study should depend on its contributions and merits, not on the trend of including multiple evaluations to strengthen a paper. In this position paper, we identify this trend through a non-exhaustive literature survey of TVCG papers from issue 1 of 2023 and 2024. We then discuss various evaluation strategy patterns in the information visualization field and how this paper can open avenues for further discussion.

\ No newline at end of file diff --git a/program/paper_w-beliv-1005.html b/program/paper_w-beliv-1005.html new file mode 100644 index 000000000..a94d19f05 --- /dev/null +++ b/program/paper_w-beliv-1005.html @@ -0,0 +1,127 @@ + IEEE VIS 2024 Content: Testing the Test: Observations When Assessing Visualization Literacy of Domain Experts

Testing the Test: Observations When Assessing Visualization Literacy of Domain Experts

Seyda Öney - University of Stuttgart, Stuttgart, Germany

Moataz Abdelaal - University of Stuttgart, Stuttgart, Germany

Kuno Kurzhals - University of Stuttgart, Stuttgart, Germany

Paul Betz - University of Stuttgart, Stuttgart, Germany

Cordula Kropp - University of Stuttgart, Stuttgart, Germany

Daniel Weiskopf - University of Stuttgart, Stuttgart, Germany

Room: To Be Announced

Abstract

Various standardized tests exist that assess individuals' visualization literacy, and their use can help to draw conclusions from studies. However, such tests rarely take into account that the test itself can create a pressure situation in which participants fear being exposed and assessed negatively. This is especially problematic when testing domain experts in design studies. We conducted interviews with experts from different domains who performed the Mini-VLAT test for visualization literacy to identify potential problems. Our participants reported that the time limit per question, ambiguities in the questions and visualizations, and missing steps in the test procedure had the greatest impact on their performance and contentment. We discuss possible changes to the test design to address these issues and how such assessment methods could be integrated into existing evaluation procedures.

\ No newline at end of file diff --git a/program/paper_w-beliv-1007.html b/program/paper_w-beliv-1007.html new file mode 100644 index 000000000..ae90943ad --- /dev/null +++ b/program/paper_w-beliv-1007.html @@ -0,0 +1,127 @@ + IEEE VIS 2024 Content: Design-Specific Transforms In Visualization

Design-Specific Transforms In Visualization

Eugene Wu - Columbia University, New York City, United States

Remco Chang - Tufts University, Medford, United States

Room: To Be Announced

Abstract

In visualization, the process of transforming raw data into visually comprehensible representations is pivotal. While existing models like the Information Visualization Reference Model describe the data-to-visual mapping process, they often overlook a crucial intermediary step: design-specific transformations. This process, occurring after data transformation but before visual-data mapping, further derives data, such as groupings, layout, and statistics, that are essential to properly render the visualization. In this paper, we advocate for a deeper exploration of design-specific transformations, highlighting their importance in understanding visualization properties, particularly in relation to user tasks. We incorporate design-specific transformations into the Information Visualization Reference Model and propose a new formalism that encompasses the user task as a function over data. The resulting formalism offers three key benefits over existing visualization models: (1) describing tasks as compositions of functions, (2) enabling analysis of data transformations for visual-data mapping, and (3) empowering reasoning about visualization correctness and effectiveness. We further discuss the potential implications of this model on visualization theory and visualization experiment design.
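To make the formalism concrete, a toy sketch (ours, not the authors' notation) of a task expressed as a composition of a data transform, a design-specific transform, and a visual-data mapping:

    from functools import reduce

    def data_transform(rows):                # standard data transformation
        return [r for r in rows if r["year"] == 2024]

    def design_specific_transform(rows):     # derives groupings/statistics needed to render
        counts = {}
        for r in rows:
            counts[r["venue"]] = counts.get(r["venue"], 0) + 1
        return counts

    def visual_mapping(counts):              # maps derived data to visual channels
        return {venue: {"bar_height": 10 * n} for venue, n in counts.items()}

    def compose(*fns):
        return reduce(lambda f, g: lambda x: g(f(x)), fns)

    task = compose(data_transform, design_specific_transform, visual_mapping)
    print(task([{"venue": "VIS", "year": 2024}, {"venue": "VIS", "year": 2023}]))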

\ No newline at end of file diff --git a/program/paper_w-beliv-1008.html b/program/paper_w-beliv-1008.html new file mode 100644 index 000000000..78416f8f8 --- /dev/null +++ b/program/paper_w-beliv-1008.html @@ -0,0 +1,127 @@ + IEEE VIS 2024 Content: Normalized Stress is Not Normalized: How to Interpret Stress Correctly

Normalized Stress is Not Normalized: How to Interpret Stress Correctly

Kiran Smelser - University of Arizona, Tucson, United States

Jacob Miller - University of Arizona, Tucson, United States

Stephen Kobourov - University of Arizona, Tucson, United States

Room: To Be Announced

Abstract

Stress is among the most commonly employed quality metrics and optimization criteria for dimension reduction projections of high-dimensional data. Complex, high-dimensional data is ubiquitous across many scientific disciplines, including machine learning, biology, and the social sciences. One of the primary methods of visualizing these datasets is with two-dimensional scatter plots that visually capture some properties of the data. Because visually determining the accuracy of these plots is challenging, researchers often use quality metrics to measure the projection's accuracy or faithfulness to the full data. One of the most commonly employed metrics, normalized stress, is sensitive to uniform scaling (stretching, shrinking) of the projection, even though such scaling does not meaningfully change anything about the projection. We investigate the effect of scaling on stress and other distance-based quality metrics analytically and empirically by showing just how much the values change and how this affects dimension reduction technique evaluations. We introduce a simple technique to make normalized stress scale-invariant and show that it accurately captures expected behavior on a small benchmark.
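For concreteness, a small numpy sketch of normalized stress and of one way to remove its scale sensitivity, evaluating it at the analytically optimal uniform scale (our reading of the idea, not necessarily the authors' exact formulation):

    import numpy as np

    def normalized_stress(d_high, d_low):
        # d_high: pairwise distances in the data; d_low: in the 2D projection.
        return np.sqrt(np.sum((d_high - d_low) ** 2) / np.sum(d_high ** 2))

    def scale_invariant_stress(d_high, d_low):
        # The uniform scale alpha minimizing sum((d_high - alpha*d_low)^2)
        # has the closed form below, so the metric no longer depends on how
        # the projection happens to be stretched or shrunk.
        alpha = np.sum(d_high * d_low) / np.sum(d_low ** 2)
        return normalized_stress(d_high, alpha * d_low)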

\ No newline at end of file diff --git a/program/paper_w-beliv-1009.html b/program/paper_w-beliv-1009.html new file mode 100644 index 000000000..0b2b2d098 --- /dev/null +++ b/program/paper_w-beliv-1009.html @@ -0,0 +1,127 @@ + IEEE VIS 2024 Content: The Role of Metacognition in Understanding Deceptive Bar Charts

The Role of Metacognition in Understanding Deceptive Bar Charts

Antonia Schlieder - Heidelberg University, Heidelberg, Germany

Jan Rummel - Heidelberg University, Heidelberg, Germany

Peter Albers - Ruprecht-Karls-Universität Heidelberg, Heidelberg, Germany

Filip Sadlo - Heidelberg University, Heidelberg, Germany

Room: To Be Announced

Abstract

The cognitive processes involved in understanding and misunderstanding visualizations have not yet been fully clarified, even for well-studied designs such as bar charts. In particular, little is known about whether viewers can improve their learning processes by getting better insight into their own cognition. This paper describes a simple method to measure the role of such metacognitive understanding when learning to read bar charts. For this purpose, we conducted an experiment in which we investigated bar chart learning over repeated trials, and tested how learning across trials was affected by metacognitive understanding. We integrate the findings into a model of metacognitive processing of visualizations, and discuss implications for the design of visualizations.

\ No newline at end of file diff --git a/program/paper_w-beliv-1015.html b/program/paper_w-beliv-1015.html new file mode 100644 index 000000000..bb6390676 --- /dev/null +++ b/program/paper_w-beliv-1015.html @@ -0,0 +1,127 @@ + IEEE VIS 2024 Content: Tasks and Telephone: Understanding Barriers to Inference due to Issues in Experiment Design

Tasks and Telephone: Understanding Barriers to Inference due to Issues in Experiment Design

Abhraneel Sarma - Northwestern University, Evanston, United States

Sheng Long - Northwestern University, Evanston, United States

Michael Correll - Northeastern University, Portland, United States

Matthew Kay - Northwestern University, Chicago, United States

Room: To Be Announced

Abstract

Empirical studies in visualisation often compare visual representations to identify the most effective visualisation for a particular visual judgement or decision making task. However, the effectiveness of a visualisation may be intrinsically related to, and difficult to distinguish from, factors such as visualisation literacy. Complicating matters further, visualisation literacy itself is not a singular intrinsic quality, but can be a result of several distinct challenges that a viewer encounters when performing a task with a visualisation. In this paper, we describe how such challenges apply to experiments that we use to evaluate visualisations, and discuss a set of considerations for designing studies in the future. Finally, we argue that aspects of the study design which are often neglected or overlooked (such as the onboarding of participants, tutorials, training etc.) can have a big role in the results of a study and can potentially impact the conclusions that the researchers can draw from the study.

\ No newline at end of file diff --git a/program/paper_w-beliv-1016.html b/program/paper_w-beliv-1016.html new file mode 100644 index 000000000..4523ce132 --- /dev/null +++ b/program/paper_w-beliv-1016.html @@ -0,0 +1,127 @@ + IEEE VIS 2024 Content: Old Wine in a New Bottle? Analysis of Visual Lineups with Signal Detection Theory

Old Wine in a New Bottle? Analysis of Visual Lineups with Signal Detection Theory

Sheng Long - Northwestern University, Evanston, United States

Matthew Kay - Northwestern University, Chicago, United States

Room: To Be Announced

Abstract

This position paper critically examines the graphical inference framework for evaluating visualizations using the lineup task. We present a re-analysis of lineup task data using signal detection theory, applying four Bayesian non-linear models to investigate whether color ramps with more color name variation increase false discoveries. Our study utilizes data from Reda and Szafir’s previous work [20], corroborating their findings while providing additional insights into sensitivity and bias differences across colormaps and individuals. We suggest improvements to lineup study designs and explore the connections between graphical inference, signal detection theory, and statistical decision theory. Our work contributes a more perceptually grounded approach for assessing visualization effectiveness and offers a path forward for better aligning graphical inference methods with human cognition. The results have implications for the development and evaluation of visualizations, particularly for exploratory data analysis scenarios. Supplementary materials are available at https://osf.io/xd5cj/.
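For readers unfamiliar with the framework, the basic signal detection quantities look as follows (standard equal-variance SDT with a common correction; the paper itself fits Bayesian non-linear models rather than these point estimates):

    from scipy.stats import norm

    def dprime_and_criterion(hits, misses, false_alarms, correct_rejections):
        # Corrected rates avoid infinite z-scores at exactly 0 or 1.
        hit_rate = (hits + 0.5) / (hits + misses + 1.0)
        fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
        d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)              # sensitivity
        criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))   # response bias
        return d_prime, criterion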

\ No newline at end of file diff --git a/program/paper_w-beliv-1018.html b/program/paper_w-beliv-1018.html new file mode 100644 index 000000000..bf03fe1b0 --- /dev/null +++ b/program/paper_w-beliv-1018.html @@ -0,0 +1,127 @@ + IEEE VIS 2024 Content: Visualising Lived Experience: Learning from a Master and Alternative Narrative Framing

Visualising Lived Experience: Learning from a Master and Alternative Narrative Framing

Mai Elshehaly - City, University of London, London, United Kingdom

Mirela Reljan-Delaney - City, University of London, London, United Kingdom

Jason Dykes - City, University of London, London, United Kingdom

Aidan Slingsby - City, University of London, London, United Kingdom

Jo Wood - City, University of London, London, United Kingdom

Sam Spiegel - University of Edinburgh, Edinburgh, United Kingdom

Room: To Be Announced

Abstract

Visualising personal experiences is often described as a means for self-reflection, shaping one’s identity, and sharing it with others. In policymaking, personal narratives are regarded as an important source of intelligence to shape public discourse and policy. Therefore, policymakers are interested in the interplay between individual-level experiences and macro-political processes that play into shaping these experiences. In this context, visualisation is regarded as a medium for advocacy, creating a power balance between individuals and the power structures that influence their health and well-being. In this paper, we offer a politically-framed reflection on how visualisation creators define lived experience data, and what design choices they make for visualising them. We identify data characteristics and design choices that enable visualisation authors and consumers to engage in a process of narrative co-construction, while navigating structural forms of inequality. Our political framing is driven by ideas of master and alternative narratives from Diversity Science, in which authors and narrators engage in a process of negotiation with power structures to either maintain or challenge the status quo.

\ No newline at end of file diff --git a/program/paper_w-beliv-1020.html b/program/paper_w-beliv-1020.html new file mode 100644 index 000000000..8fd05d5d6 --- /dev/null +++ b/program/paper_w-beliv-1020.html @@ -0,0 +1,127 @@ + IEEE VIS 2024 Content: Exploring Subjective Notions of Explainability through Counterfactual Visualization of Sentiment Analysis

Exploring Subjective Notions of Explainability through Counterfactual Visualization of Sentiment Analysis

Anamaria Crisan - Tableau Research, Seattle, United States

Nathan Butters - Tableau Software, Seattle, United States

Zoe Zoe - Tableau Software, Seattle, United States

Room: To Be Announced

Abstract

The generation and presentation of counterfactual explanations (CFEs) are a commonly used, model-agnostic approach to helping end-users reason about the validity of AI/ML model outputs. By demonstrating how sensitive the model's outputs are to minor variations, CFEs are thought to improve understanding of the model's behavior, identify potential biases, and increase the transparency of 'black box' models. Here, we examine how CFEs support a diverse audience, both with and without technical expertise, in understanding the results of an LLM-informed sentiment analysis. We conducted a preliminary pilot study with ten individuals whose expertise ranged from NLP, ML, and ethics to specific application domains; all were actively using or working with AI/ML technology as part of their daily jobs. Through semi-structured interviews grounded in a set of concrete examples, we examined how CFEs influence participants' perceptions of the model's correctness, fairness, and trustworthiness, and how visualization of CFEs specifically influences those perceptions. We also surface how participants wrestle with their internal definitions of 'explainability' relative to what CFEs present, their cultures, and their backgrounds, in addition to the much more widely studied phenomenon of comparing the model's performance against their baseline expectations. Compared to prior research, our findings highlight the sociotechnical frictions that CFEs surface but do not necessarily remedy. We conclude with the design implications of developing transparent AI/ML visualization systems for more general tasks.

\ No newline at end of file diff --git a/program/paper_w-beliv-1021.html b/program/paper_w-beliv-1021.html new file mode 100644 index 000000000..84676c666 --- /dev/null +++ b/program/paper_w-beliv-1021.html @@ -0,0 +1,127 @@ + IEEE VIS 2024 Content: Merits and Limits of Preregistration for Visualization Research

Merits and Limits of Preregistration for Visualization Research

Lonni Besançon - Linköping University, Norrköping, Sweden

Brian Nosek - University of Virginia, Charlottesville, United States

Tamarinde Haven - Tilburg University, Tilburg, Netherlands

Miriah Meyer - Linköping University, Nörrkoping, Sweden

Cody Dunne - Northeastern University, Boston, United States

Mohammad Ghoniem - Luxembourg Institute of Science and Technology, Belvaux, Luxembourg

Room: To Be Announced

Abstract

The replication crisis has spawned a revolution in scientific methods, aimed at increasing the transparency, robustness, and reliability of scientific outcomes. In particular, the practice of preregistering study designs has shown important advantages. Preregistration can help limit questionable research practices, as well as increase the success rate of study replications. Many fields have now adopted preregistration as a default expectation for published studies. In 2022, we set up a panel "Merits and Limits of User Study Preregistration" with the overall goal of explaining the concept of preregistration to a wide VIS audience and discussing its suitability for visualization research. We report on the arguments and discussion of this panel in the hope that it can benefit the visualization community at large. All materials and a copy of this paper are available on our OSF repository at https://osf.io/wes57/.

\ No newline at end of file diff --git a/program/paper_w-beliv-1026.html b/program/paper_w-beliv-1026.html new file mode 100644 index 000000000..6c53d9b2f --- /dev/null +++ b/program/paper_w-beliv-1026.html @@ -0,0 +1,127 @@ + IEEE VIS 2024 Content: Visualization Artifacts are Boundary Objects

Visualization Artifacts are Boundary Objects

Jasmine Tan Otto - UC Santa Cruz, Santa Cruz, United States

Scott Davidoff - California Institute of Technology, Pasadena, United States

Room: To Be Announced

Abstract

Despite 30+ years of academic practice, visualization still lacks an explanation of how and why it functions in complex organizations performing knowledge work. This survey examines the intersection of organizational studies and visualization design, highlighting the concept of boundary objects, which visualization practitioners are adopting in both CSCW (computer-supported collaborative work) and HCI. This paper also collects the prior literature on boundary objects in visualization design studies, a methodology which maps closely to action research in organizations and addresses the same problems of 'knowing in common'. Process artifacts generated by visualization design studies function as boundary objects in their own right, facilitating knowledge transfer across disciplines within an organization. Currently, visualization faces the challenge of explaining how sense-making functions across domains, through visualization artifacts, and how these support decision-making. As a deeply interdisciplinary field, visualization should adopt the theory of boundary objects in order to embrace its plurality of domains and systems, whilst empowering its practitioners with a unified process-based theory.

\ No newline at end of file diff --git a/program/paper_w-beliv-1027.html b/program/paper_w-beliv-1027.html new file mode 100644 index 000000000..690c741f2 --- /dev/null +++ b/program/paper_w-beliv-1027.html @@ -0,0 +1,127 @@ + IEEE VIS 2024 Content: [position paper] The Visualization JUDGE : Can Multimodal Foundation Models Guide Visualization Design Through Visual Perception?

[position paper] The Visualization JUDGE : Can Multimodal Foundation Models Guide Visualization Design Through Visual Perception?

Matthew Berger - Vanderbilt University, Nashville, United States

Shusen Liu - Lawrence Livermore National Laboratory , Livermore, United States

Room: To Be Announced

Abstract

Foundation models for vision and language are the basis of AI applications across numerous sectors of society. The success of these models stems from their ability to mimic human capabilities, namely visual perception in vision models, and analytical reasoning in large language models. As visual perception and analysis are fundamental to data visualization, in this position paper we ask: how can we harness foundation models to advance progress in visualization design? Specifically, how can multimodal foundation models (MFMs) guide visualization design through visual perception? We approach these questions by investigating the effectiveness of MFMs for perceiving visualization, and by formalizing the overall visualization design and optimization space. Specifically, we argue that MFMs can best be viewed as judges, equipped with the ability to criticize visualizations and to provide actionable suggestions for improving a visualization. We provide a deeper characterization of text-to-image generative models and multimodal large language models, organized by what these models provide as output and by how that output can be utilized to guide design decisions. We hope that our perspective can inspire researchers in visualization on how to approach MFMs for visualization design.

\ No newline at end of file diff --git a/program/paper_w-beliv-1033.html b/program/paper_w-beliv-1033.html new file mode 100644 index 000000000..cb6aac73f --- /dev/null +++ b/program/paper_w-beliv-1033.html @@ -0,0 +1,127 @@ + IEEE VIS 2024 Content: We Don't Know How to Assess LLM Contributions in VIS/HCI

We Don't Know How to Assess LLM Contributions in VIS/HCI

Anamaria Crisan - Tableau Research, Seattle, United States

Room: To Be Announced

Abstract

Submissions of original research that use Large Language Models (LLMs) or that study their behavior suddenly account for a sizable portion of works submitted and accepted to visualization (VIS) conferences and similar venues in human-computer interaction (HCI). In this brief position paper, I argue that reviewers are relatively unprepared to evaluate these submissions effectively. To support this conjecture, I reflect on my experience serving on four program committees for VIS and HCI conferences over the past year. I describe common reviewer critiques that I observed and highlight how these critiques influence the review process. I also raise some concerns about these critiques that could limit applied LLM research to all but the best-resourced labs. While I conclude with suggestions for evaluating research contributions that incorporate LLMs, the ultimate goal of this position paper is to stimulate a discussion of the review process and its challenges.

\ No newline at end of file diff --git a/program/paper_w-beliv-1034.html b/program/paper_w-beliv-1034.html new file mode 100644 index 000000000..f0bf67130 --- /dev/null +++ b/program/paper_w-beliv-1034.html @@ -0,0 +1,127 @@ + IEEE VIS 2024 Content: Bridging Quantitative and Qualitative Methods for Visualization Research: A Data/Semantics Perspective in the Light of Advanced AI

Bridging Quantitative and Qualitative Methods for Visualization Research: A Data/Semantics Perspective in the Light of Advanced AI

Daniel Weiskopf - University of Stuttgart, Stuttgart, Germany

Room: To Be Announced

Abstract

This paper revisits the role of quantitative and qualitative methods in visualization research in the context of advancements in artificial intelligence (AI). The focus is on how we can bridge between the different methods in an integrated process of analyzing user study data. To this end, a process model of (potentially iterated) semantic enrichment of data is proposed. This joint perspective of data and semantics facilitates the integration of quantitative and qualitative methods. The model is motivated by examples of prior work, especially in the area of eye tracking user studies and the coding of data-rich observations. Finally, there is a discussion of open issues and research opportunities in the interplay between AI and qualitative and quantitative methods for visualization research.

\ No newline at end of file diff --git a/program/paper_w-beliv-1035.html b/program/paper_w-beliv-1035.html new file mode 100644 index 000000000..555a8e8cc --- /dev/null +++ b/program/paper_w-beliv-1035.html @@ -0,0 +1,127 @@ + IEEE VIS 2024 Content: Complexity as Design Material

Complexity as Design Material

Florian Windhager - University for Continuing Education Krems, Krems, Austria

Alfie Abdul-Rahman - King's College London, London, United Kingdom

Mark-Jan Bludau - University of Applied Sciences Potsdam, Potsdam, Germany

Nicole Hengesbach - Warwick Institute for the Science of Cities, Coventry, United Kingdom

Houda Lamqaddam - University of Amsterdam, Amsterdam, Netherlands

Isabel Meirelles - OCAD University, Toronto, Canada

Bettina Speckmann - TU Eindhoven, Eindhoven, Netherlands

Michael Correll - Northeastern University, Portland, United States

Room: To Be Announced

Abstract

Complexity is often seen as an inherent negative in information design, with the job of the designer being to reduce or eliminate complexity, and with principles like Tufte's "data-ink ratio" or "chartjunk" to operationalize minimalism and simplicity in visualizations. However, in this position paper, we call for a more expansive view of complexity as a design material, like color or texture or shape: an element of information design that can be used in many ways, many of which are beneficial to the goals of using data to understand the world around us. We describe complexity as a phenomenon that occurs not just in visual design but in every aspect of the sensemaking process, from data collection to interpretation. For each of these stages, we present examples of ways that these various forms of complexity can be used (or abused) in visualization design. We ultimately call on the visualization community to build a more nuanced view of complexity, to look for places to usefully integrate complexity in multiple stages of the design process, and, even when the goal is to reduce complexity, to look for the non-visual forms of complexity that may have otherwise been overlooked.

\ No newline at end of file diff --git a/program/paper_w-beliv-1037.html b/program/paper_w-beliv-1037.html new file mode 100644 index 000000000..5f147bf75 --- /dev/null +++ b/program/paper_w-beliv-1037.html @@ -0,0 +1,127 @@ + IEEE VIS 2024 Content: Position paper: Proposing the use of an “Advocatus Diaboli” as a pragmatic approach to improve transparency in qualitative data analysis and reporting

Position paper: Proposing the use of an “Advocatus Diaboli” as a pragmatic approach to improve transparency in qualitative data analysis and reporting

Judith Friedl-Knirsch - University of Applied Sciences Upper Austria, Hagenberg, Austria

Room: To Be Announced

Abstract

Qualitative data analysis is widely adopted for user evaluation, not only in the visualisation community but also in related communities, such as human-computer interaction and augmented and virtual reality. However, the data analysis process is often not clearly described, and the results are often simply listed as interesting quotes from, or summaries of quotes uttered by, study participants. This position paper proposes an early concept for the use of a researcher as an "Advocatus Diaboli", or devil's advocate, who tries to disprove the results of the data analysis by looking for quotes that contradict the findings, or for leading questions and task designs. Whatever this devil's advocate finds can then be used to iterate on the findings and the analysis process to form more suitable theories; alternatively, researchers can clarify why they did not incorporate these counterexamples into their theory. This process could increase transparency in the qualitative data analysis process and increase trust in its findings, while being mindful of the necessary resources.

\ No newline at end of file diff --git a/program/paper_w-eduvis-1007.html b/program/paper_w-eduvis-1007.html new file mode 100644 index 000000000..3f73cfd5f --- /dev/null +++ b/program/paper_w-eduvis-1007.html @@ -0,0 +1,127 @@ + IEEE VIS 2024 Content: Beyond storytelling with data: Guidelines for designing exploratory visualizations

Beyond storytelling with data: Guidelines for designing exploratory visualizations

Jennifer Frazier - Science Communication Lab, Berkeley, United States. University of California, San Francisco, San Francisco, United States

Room: To Be Announced

Abstract

Visualizations are a critical medium not only for telling stories, but also for fostering exploration. Yet while there are countless examples of how to use visualizations for "storytelling with data," there are few guidelines on how to design visualizations for public exploration. This educator report draws on decades of work in science museums, a public context focused on designing interactive experiences for exploration, to provide evidence-based guidelines for designing exploratory visualizations. Recent studies on interactive visualizations in museums are contextualized within a larger body of museum research on designs that support exploratory learning in interactive exhibits. Synthesizing these studies highlights that, to create successful exploratory visualizations, designers can apply long-standing guidelines from exhibit design but need to provide more aids for interpretation.

\ No newline at end of file diff --git a/program/paper_w-eduvis-1008.html b/program/paper_w-eduvis-1008.html new file mode 100644 index 000000000..c677b5dec --- /dev/null +++ b/program/paper_w-eduvis-1008.html @@ -0,0 +1,127 @@ + IEEE VIS 2024 Content: Challenges and Opportunities of Teaching Data Visualization Together with Data Science

Challenges and Opportunities of Teaching Data Visualization Together with Data Science

Shri Harini Ramesh - Carleton University, Ottawa, Canada

Fateme Rajabiyazdi - Carleton University, Ottawa, Canada. Bruyere Research Institute, Ottawa, Canada

Room: To Be Announced

Abstract

With the increasing amount of data globally, analyzing and visualizing data are becoming essential skills across various professions, and it is important to equip university students with these essential data skills. To learn, design, and develop data visualization, students need knowledge of programming and data science topics. Many university programs lack dedicated data science courses for undergraduate students, making it important to introduce these concepts through integrated courses. However, combining data science and data visualization into one course can be challenging due to time constraints and the heavy learning load. In this paper, we discuss the development of a course that teaches data science and data visualization together, and share the results of the post-course evaluation survey. From the survey's results, we identified four challenges, including difficulty in learning multiple tools and diverse data science topics, varying proficiency levels with tools and libraries, and selecting and cleaning datasets. We also distilled opportunities for developing a successful data science and visualization course, including clarifying the course structure, emphasizing visualization literacy early in the course, updating the course content according to student needs, using large real-world datasets, learning from industry professionals, and promoting collaboration among students.

\ No newline at end of file diff --git a/program/paper_w-eduvis-1010.html b/program/paper_w-eduvis-1010.html new file mode 100644 index 000000000..2bbbb4ca0 --- /dev/null +++ b/program/paper_w-eduvis-1010.html @@ -0,0 +1,127 @@ + IEEE VIS 2024 Content: Implementing the Solution Framework in a Social Impact Project

Implementing the Solution Framework in a Social Impact Project

Victor Muñoz - Independent Information Designer, Medellin, Colombia

Kevin Ford - Corporate Information Designer, Arlington Hts, United States

Room: To Be Announced

Abstract

This report examines the implementation of the Solution Framework in a social impact project facilitated by VizForSocialGood. It outlines the data visualization process, detailing each stage and offering practical insights. The framework's application demonstrates its effectiveness in enhancing project quality, efficiency, and collaboration, making it a valuable tool for educational and professional environments.

\ No newline at end of file diff --git a/program/paper_w-eduvis-1013.html b/program/paper_w-eduvis-1013.html new file mode 100644 index 000000000..97982da1d --- /dev/null +++ b/program/paper_w-eduvis-1013.html @@ -0,0 +1,127 @@ + IEEE VIS 2024 Content: AdVizor: Using Visual Explanations to Guide Data-Driven Student Advising

AdVizor: Using Visual Explanations to Guide Data-Driven Student Advising

Riley Weagant - Ontario Tech University, Oshawa, Canada

Zixin Zhao - Ontario Tech University, Oshawa, Canada

Adam Badley - Ontario Tech University, Oshawa, Canada

Christopher Collins - Ontario Tech University, Oshawa, Canada

Room: To Be Announced

Abstract

Academic advising can positively impact struggling students' success. We developed AdVizor, a data-driven learning analytics tool that supports advisors in academic risk prediction. Our system combines a random forest model for grade prediction probabilities with a visualization dashboard that allows advisors to interpret model predictions. We evaluated our system in mock advising sessions with academic advisors and undergraduate students at our university. Results show that the system can easily integrate into the existing advising workflow, and that visualizations of model outputs can be learned through short training sessions. AdVizor supports and complements the existing expertise of the advisor while helping to facilitate advisor-student discussion and analysis. Advisors found the system assisted them in guiding student course selection for the upcoming semester, allowing them to guide students to prioritize the most critical and impactful courses. Both advisors and students perceived the system positively and were interested in using it in the future. Our results encourage the development of intelligent advising systems in higher education, tailored to advisors.
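A minimal sketch of the modeling step described above (feature names and data are invented for illustration; the dashboard component is not shown):

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Hypothetical per-student features: prior GPA, credits attempted, attendance rate.
    X_train = np.array([[3.1, 15, 0.90], [2.0, 18, 0.60], [3.8, 12, 0.95], [1.9, 15, 0.50]])
    y_train = np.array([1, 0, 1, 0])  # 1 = on track, 0 = at risk

    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
    # The class probabilities, not just the labels, feed the dashboard advisors interpret.
    print(model.predict_proba(np.array([[2.5, 16, 0.70]])))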

\ No newline at end of file diff --git a/program/paper_w-eduvis-1015.html b/program/paper_w-eduvis-1015.html new file mode 100644 index 000000000..7bd055637 --- /dev/null +++ b/program/paper_w-eduvis-1015.html @@ -0,0 +1,127 @@ + IEEE VIS 2024 Content: Exploring the Role of Visualization in Enhancing Computing Education: A Systematic Literature Review

Exploring the Role of Visualization in Enhancing Computing Education: A Systematic Literature Review

Naaz Sibia - University of Toronto, Toronto, Canada

Michael Liut - University of Toronto Mississauga, Mississauga, Canada

Carolina Nobre - University of Toronto, Toronto, Canada

Room: To Be Announced

Abstract

The integration of visualization in computing education has emerged as a promising strategy to enhance student understanding and engagement in complex computing concepts. Motivated by the need to explore effective teaching methods, this research systematically reviews the applications of visualization tools in computing education, aiming to identify gaps and opportunities for future research. We conducted a systematic literature review using papers from Semantic Scholar and Web of Science, with a refined set of keywords to gather relevant studies. Our search yielded 288 results, which were systematically filtered down to 90 papers. Data extraction focused on publication details, research methods, key findings, future research suggestions, and research categories. Our review identified a diverse range of visualization tools and techniques used across different areas of computing education, including algorithms, programming, online learning, and problem-solving. The findings highlight the effectiveness of these tools in improving student engagement, understanding, and learning outcomes. However, there is a need for rigorous evaluations and the development of new models tailored to specific learning difficulties. By identifying effective visualization techniques and areas for further investigation, this review encourages the continued development and integration of visual tools in computing education to support the advancement of teaching methodologies.

\ No newline at end of file diff --git a/program/paper_w-eduvis-1017.html b/program/paper_w-eduvis-1017.html new file mode 100644 index 000000000..a39ff294a --- /dev/null +++ b/program/paper_w-eduvis-1017.html @@ -0,0 +1,127 @@ + IEEE VIS 2024 Content: Visualization Software: How to Select the Right Software for Teaching Visualization.

Visualization Software: How to Select the Right Software for Teaching Visualization.

Sanjog Ray - Indian institute of management indore, Indore, India

Room: To Be Announced

Abstract

The digitalisation of organisations has transformed the way they view data. All employees are expected to be data literate, and managers are expected to make data-driven decisions [1]. The ability to analyse and visualize data is a crucial skill expected of every decision-maker. To help managers develop the skill of data visualization, business schools across the world offer courses in data visualization. From an educator's perspective, one key decision to make while designing a visualization course for management students is which software tool to use in the course. Existing literature on data visualization in the scientific community is primarily focused on tools used by researchers or computer scientists ([3], [4]). In [5] the authors evaluate the landscape of commercially available visual analytics systems. In business-related publications like Harvard Business Review, the focus is more on selecting the right chart or on designing effective visualizations ([6], [7]). There is a lack of literature to guide educators in teaching visualization to management students. This article attempts to guide educators who teach visualization to management students on how to select the appropriate software tool for their course.

\ No newline at end of file diff --git a/program/paper_w-eduvis-1018.html b/program/paper_w-eduvis-1018.html new file mode 100644 index 000000000..6507988ff --- /dev/null +++ b/program/paper_w-eduvis-1018.html @@ -0,0 +1,127 @@ + IEEE VIS 2024 Content: Teaching Information Visualization through Situated Design: Case Studies from the Classroom

Teaching Information Visualization through Situated Design: Case Studies from the Classroom

Doris Kosminsky - Universidade Federal do Rio de Janeiro, Rio de Janeiro, Brazil

Renata Perim Lopes - Federal University of Rio de Janeiro, Rio de Janeiro, Brazil

Regina Reznik - UFRJ, RJ, Brazil. IBGE, RJ, Brazil

Room: To Be Announced

Abstract

In this article, we discuss an experience with design and situated learning in the Creative Data Visualization course, part of the Visual Communication Design undergraduate program at the Federal University of Rio de Janeiro, a free, public Brazilian university that, thanks to affirmative action policies, has become more inclusive over the years. We begin with a brief introduction to the terms Situated Knowledge, coined by Donna Haraway, Situated Design, based on the former concept, and Situated Learning. We then examine the similarities and differences between these notions and the term Situated Visualization to present a model for the concept of Situated Learning in Information Visualization. Following this foundation, we describe the applied methodology, emphasizing the importance of integrating real-world contexts into students’ projects. As a case study, we present three student projects produced as final assignments for the course. Through this article, we aim to underscore the articulation of situated design concepts in information visualization activities and contribute to teaching and learning practices in this field, particularly within the Global South.

\ No newline at end of file diff --git a/program/paper_w-eduvis-1019.html b/program/paper_w-eduvis-1019.html new file mode 100644 index 000000000..75760c3b5 --- /dev/null +++ b/program/paper_w-eduvis-1019.html @@ -0,0 +1,127 @@ + IEEE VIS 2024 Content: Reflections on Teaching Data Visualization at the Journalism School

Reflections on Teaching Data Visualization at the Journalism School

Xingyu Lan - Fudan University, Shanghai, China

Room: To Be Announced

Abstract

The integration of data visualization in journalism has catalyzed the growth of data storytelling in recent years. Today, it is increasingly common for journalism schools to incorporate data visualization into their curricula. However, the approach to teaching data visualization in journalism schools can diverge significantly from that in computer science or design schools, influenced by the varied backgrounds of students and the distinct value systems inherent to these disciplines. This paper reviews my experience and reflections on teaching data visualization in a journalism school. First, I discuss the prominent characteristics of journalism education that pose challenges for course design and teaching. Then, I share firsthand teaching experiences related to each characteristic and recommend approaches for effective teaching.

\ No newline at end of file diff --git a/program/paper_w-eduvis-1020.html b/program/paper_w-eduvis-1020.html new file mode 100644 index 000000000..dec24a382 --- /dev/null +++ b/program/paper_w-eduvis-1020.html @@ -0,0 +1,127 @@ + IEEE VIS 2024 Content: Developing a Robust Cartography Curriculum to Train the Professional Cartographer

Developing a Robust Cartography Curriculum to Train the Professional Cartographer

Jonathan Nelson - University of Wisconsin-Madison, Madison, United States

P. William Limpisathian - University of Wisconsin-Madison, Madison, United States

Robert Roth - University of Wisconsin-Madison, Madison, United States

Room: To Be Announced

Abstract

In this paper, we discuss our experiences advancing a professional-oriented graduate program in Cartography & GIScience at the University of Wisconsin-Madison to account for fundamental shifts in conceptual framings, rapidly evolving mapping technologies, and diverse student needs. We focus our attention on considerations for the cartography curriculum given its relevance to (geo)visualization education and map literacy. We reflect on challenges associated with, and lessons learned from, developing a comprehensive and cohesive cartography curriculum across in-person and online learning modalities for a wide range of professional student audiences.

\ No newline at end of file diff --git a/program/paper_w-eduvis-1026.html b/program/paper_w-eduvis-1026.html new file mode 100644 index 000000000..605fec5ea --- /dev/null +++ b/program/paper_w-eduvis-1026.html @@ -0,0 +1,127 @@ + IEEE VIS 2024 Content: What makes school visits to digital science centers successful?

What makes school visits to digital science centers successful?

Andreas Göransson - Linköping university, Norrköping, Sweden

Konrad J Schönborn - Linköping University, Norrköping, Sweden

Room: To Be Announced

Abstract

For over half a century, science centers have been key in communicating science, aiming to increase interest and curiosity in STEM and to promote lifelong learning. Science centers integrate interactive technologies like dome displays, touch tables, VR, and AR for immersive learning, letting visitors explore complex phenomena, such as conducting a virtual autopsy. The shift towards digitally interactive exhibits has also expanded science centers beyond physical locations to virtual spaces, extending their reach into classrooms. Our investigation revealed several key factors for impactful school visits involving interactive data visualization. Immersive formats such as full-dome movies provide unique perspectives on vast and microscopic phenomena. Hands-on discovery allows pupils to manipulate and investigate data, leading to deeper engagement. Collaborative interaction fosters active learning through group participation. Additionally, clear curriculum connections ensure that visits are pedagogically meaningful. We propose a three-stage model for school visits. The "Experience" stage involves immersive visual experiences to spark interest. The "Engagement" stage builds on this by providing hands-on interaction with data visualization exhibits. The "Applicate" stage offers opportunities to apply and create using data visualization. A future goal of the model is to broaden STEM reach, enabling pupils to benefit from data visualization experiences even if they cannot visit centers.

\ No newline at end of file diff --git a/program/paper_w-eduvis-1027.html b/program/paper_w-eduvis-1027.html new file mode 100644 index 000000000..c068f0bd9 --- /dev/null +++ b/program/paper_w-eduvis-1027.html @@ -0,0 +1,127 @@ + IEEE VIS 2024 Content: An Inductive Approach for Identification of Barriers to PCP Literacy

An Inductive Approach for Identification of Barriers to PCP Literacy

Chandana Srinivas - University of San Francisco, San Francisco, United States

Elif E. Firat - Cukurova University, Adana, Turkey

Robert S. Laramee - University of Nottingham, Nottingham, United Kingdom

Alark Joshi - University of San Francisco, San Francisco, United States

Room: To Be Announced

Abstract

Parallel coordinate plots (PCPs) are gaining popularity in data exploration, statistical analysis, and predictive analysis, as well as in data-driven storytelling. In this paper, we present the results of a post-hoc analysis of a dataset from a PCP literacy intervention to identify barriers to PCP literacy. We analyzed question responses, performed group coding on each individual response, and inductively identified new barriers to PCP literacy. Based on our analysis, we present an extended and enhanced list of barriers to PCP literacy. Our findings have implications for educational interventions targeting PCP literacy and provide an approach for students to learn about PCPs through active learning.

\ No newline at end of file diff --git a/program/paper_w-eduvis-1028.html b/program/paper_w-eduvis-1028.html new file mode 100644 index 000000000..960a76885 --- /dev/null +++ b/program/paper_w-eduvis-1028.html @@ -0,0 +1,127 @@ + IEEE VIS 2024 Content: Space to Teach: Content-Rich Canvases for Visually-Intensive Education

Space to Teach: Content-Rich Canvases for Visually-Intensive Education

Jesse Harden - Virginia Tech, Blacksburg, United States

Nurit Kirshenbaum - University of Hawaii at Manoa, Honolulu, United States

Roderick S Tabalba Jr. - University of Hawaii at Manoa, Honolulu, United States

Ryan Theriot - University of Hawaii at Manoa, Honolulu, United States

Michael L. Rogers - The University of Hawai'i at Mānoa, Honolulu, United States

Mahdi Belcaid - University of Hawaii at Manoa, Honolulu, United States

Chris North - Virginia Tech, Blacksburg, United States

Luc Renambot - University of Illinois at Chicago, Chicago, United States

Lance Long - University of Illinois at Chicago, Chicago, United States

Andrew E Johnson - University of Illinois Chicago, Chicago, United States

Jason Leigh - University of Hawaii at Manoa, Honolulu, United States

Room: To Be Announced

Abstract

With the decreasing cost of consumer display technologies making it easier for universities to have larger displays in classrooms, and the ubiquitous use of online tools such as collaborative whiteboards for remote learning during the COVID-19 pandemic, combining the two can be useful in higher education. This is especially true in visually intensive classes, such as data visualization courses, that can benefit from additional "space to teach," coined after the "space to think" sense-making idiom. In this paper, we reflect on our approach to using SAGE3, a collaborative whiteboard with advanced features, in higher education to teach visually intensive classes, provide examples of activities from our own visually-intensive courses, and present student feedback. We gather our observations into usage patterns for using content-rich canvases in education.

\ No newline at end of file diff --git a/program/paper_w-eduvis-1029.html b/program/paper_w-eduvis-1029.html new file mode 100644 index 000000000..cca01c4af --- /dev/null +++ b/program/paper_w-eduvis-1029.html @@ -0,0 +1,127 @@ + IEEE VIS 2024 Content: Engaging Data-Art: Conducting a Public Hands-On Workshop

Engaging Data-Art: Conducting a Public Hands-On Workshop

Jonathan C Roberts - Bangor University, Bangor, United Kingdom

Room: To Be Announced

Abstract

Data-art blends visualisation, data science, and artistic expression, allowing people to transform information and data into exciting and interesting visual narratives. Hosting a public data-art hands-on workshop enables participants to engage with data and learn fundamental visualisation techniques; however, being a public event, it presents a range of challenges. We outline our approach to organising and conducting a public workshop that caters to a wide age range, from children to adults. We divide the tutorial into three sections, focusing on data, sketching skills, and visualisation. We place emphasis on public engagement and ensure that participants have fun while learning new skills.

\ No newline at end of file diff --git a/program/paper_w-eduvis-1030.html b/program/paper_w-eduvis-1030.html new file mode 100644 index 000000000..d5e2753f0 --- /dev/null +++ b/program/paper_w-eduvis-1030.html @@ -0,0 +1,127 @@ + IEEE VIS 2024 Content: TellUs – Leveraging the power of LLMs with visualization to benefit science centers.

TellUs – Leveraging the power of LLMs with visualization to benefit science centers.

Lonni Besançon - Linköping University, Norrköping, Sweden

Mathis Brossier - LiU Linköping Universitet, Norrköping, Sweden

Omar Mena - King Abdullah University of Science and Technology, Thuwal, Saudi Arabia

Erik Sundén - Linköping University, Norrköping, Sweden

Andreas Göransson - Linköping university, Norrköping, Sweden

Anders Ynnerman - Linköping University, Norrköping, Sweden

Konrad J Schönborn - Linköping University, Norrköping, Sweden

Room: To Be Announced

Abstract

We propose to leverage recent developments in Large Language Models, in combination with data visualization software and devices in science centers and schools, to foster more personalized learning experiences. The main goal of our endeavour is to give pupils and visitors the same experience they would get with a professional facilitator when interacting with data visualizations of complex scientific phenomena. We describe the results from our early prototypes and the intended implementation and testing of our idea.

\ No newline at end of file diff --git a/program/paper_w-eduvis-1031.html b/program/paper_w-eduvis-1031.html new file mode 100644 index 000000000..804402ac4 --- /dev/null +++ b/program/paper_w-eduvis-1031.html @@ -0,0 +1,127 @@ + IEEE VIS 2024 Content: What Can Educational Science Offer Visualization? A Reflective Essay

What Can Educational Science Offer Visualization? A Reflective Essay

Konrad J Schönborn - Linköping University, Norrköping, Sweden

Lonni Besançon - Linköping University, Norrköping, Sweden

Room: To Be Announced

Abstract

In this reflective essay, we explore how educational science can be relevant for visualization research, addressing beneficial intersections between the two communities. While visualization has become integral to various areas, including education, our own ongoing collaboration has prompted reflections and discussions that we believe could benefit visualization research. In particular, we identify five key perspectives: surpassing traditional evaluation metrics by incorporating established educational measures; defining constructs based on existing learning and educational research frameworks; applying established cognitive theories to understand interpretation of and interaction with visualizations; establishing uniform terminology across disciplines; and fostering interdisciplinary convergence. We argue that by integrating educational research constructs, methodologies, and theories, visualization research can further pursue ecological validity and thereby improve the design and evaluation of visual tools. Our essay emphasizes the potential of intensified and systematic collaborations between educational scientists and visualization researchers to advance both fields, and in doing so craft visualization systems that support comprehension, retention, transfer, and critical thinking. We offer this reflective essay as a first point of departure for a dialogue that, we hope, could help further connect educational science and visualization, by proposing future empirical studies that take advantage of interdisciplinary approaches for mutual gain to both communities.

\ No newline at end of file diff --git a/program/paper_w-energyvis-1762.html b/program/paper_w-energyvis-1762.html new file mode 100644 index 000000000..dbee8b969 --- /dev/null +++ b/program/paper_w-energyvis-1762.html @@ -0,0 +1,127 @@ + IEEE VIS 2024 Content: Extreme Weather and the Power Grid: A Case Study of Winter Storm Uri

Extreme Weather and the Power Grid: A Case Study of Winter Storm Uri

Baldwin Nsonga - Institute of Computer Science, Leipzig University, Leipzig, Germany

Andy S Berres - National Renewable Energy Laboratory, Golden, United States

Robert Jeffers - National Renewable Energy Laboratory, Golden, United States

Caitlyn Clark - National Renewable Energy Laboratory, Golden, United States

Hans Hagen - University of Kaiserslautern, Kaiserslautern, Germany

Gerik Scheuermann - Leipzig University, Leipzig, Germany

Room: To Be Announced

Abstract

Weather can have a significant impact on the power grid. Heat and cold waves lead to increased energy use as customers cool or heat their spaces, while simultaneously hampering energy production as the environment deviates from ideal operating conditions. Extreme heat has previously melted power cables, while extreme cold can cause vital parts of the energy infrastructure to freeze. Utilities hold reserves to compensate for the additional energy use, but in extreme cases that fall outside the forecast energy demand, the impact on the power grid can be severe. In this paper, we present an interactive tool to explore the relationship between weather and power outages. We demonstrate its use with the example of Winter Storm Uri’s impact on Texas in February 2021.

\ No newline at end of file diff --git a/program/paper_w-energyvis-2646.html b/program/paper_w-energyvis-2646.html new file mode 100644 index 000000000..e29d1d3a3 --- /dev/null +++ b/program/paper_w-energyvis-2646.html @@ -0,0 +1,127 @@ + IEEE VIS 2024 Content: Architecture for Web-Based Visualization of Large-Scale Energy Domains

Architecture for Web-Based Visualization of Large-Scale Energy Domains

Graham Johnson - National Renewable Energy Lab, Golden, United States

Sam Molnar - National Renewable Energy Lab, Golden, United States

Nicholas Brunhart-Lupo - National Renewable Energy Laboratory, Golden, United States

Kenny Gruchalla - National Renewable Energy Lab, Golden, United States

Room: To Be Announced

Abstract

With the growing penetration of inverter-based distributed energy resources and increased loads through electrification, power systems analyses are becoming more important and more complex. Moreover, these analyses increasingly involve the combination of interconnected energy domains with data whose spatial and temporal scales are growing by orders of magnitude, surpassing the capabilities of many existing analysis and decision-support systems. We present the architectural design, development, and application of a high-resolution web-based visualization environment capable of cross-domain analysis of tens of millions of energy assets, focusing on scalability and performance. Our system supports the exploration, navigation, and analysis of large data from diverse domains such as electrical transmission and distribution systems, mobility and electric vehicle charging networks, communications networks, cyber assets, and other supporting infrastructure. We evaluate this system across multiple use cases, describing the capabilities and limitations of a web-based approach for high-resolution energy system visualizations.
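
Level-based aggregation of the kind described can be sketched in a few lines; the grid scheme, function names, and field names below are illustrative assumptions, not the paper's implementation:

    from collections import defaultdict

    def bin_assets(assets, cell_deg):
        # Bucket point assets (lon, lat, load_mw) into grid cells whose size
        # tracks the zoom level, so the client never receives more features
        # than the screen can show.
        cells = defaultdict(lambda: [0, 0.0])   # per cell: count, summed MW
        for lon, lat, mw in assets:
            key = (int(lon // cell_deg), int(lat // cell_deg))
            cells[key][0] += 1
            cells[key][1] += mw
        return {k: {"count": c, "load_mw": s} for k, (c, s) in cells.items()}

    # Zoomed out -> coarse cells; zoomed in -> finer cells or raw features.
    overview = bin_assets([(-105.2, 39.7, 3.5), (-105.1, 39.8, 1.2)], cell_deg=1.0)
    print(overview)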

\ No newline at end of file diff --git a/program/paper_w-energyvis-2743.html b/program/paper_w-energyvis-2743.html new file mode 100644 index 000000000..83bc9ef49 --- /dev/null +++ b/program/paper_w-energyvis-2743.html @@ -0,0 +1,127 @@ + IEEE VIS 2024 Content: Pathways Explorer: Interactive Visualization of Climate Transition Scenarios

Pathways Explorer: Interactive Visualization of Climate Transition Scenarios

François Lévesque - Kashika Studio, Montreal, Canada

Louis Beaumier - Polytechnique Montreal, Montreal, Canada

Thomas Hurtut - Polytechnique Montreal, Montreal, Canada

Room: To Be Announced

Abstract

In the pursuit of achieving net-zero greenhouse gas emissions by 2050, policymakers and researchers require sophisticated tools to explore and compare various climate transition scenarios. This paper introduces the Pathways Explorer, an innovative visualization tool designed to facilitate these comparisons by providing an interactive platform that allows users to select, view, and dissect multiple pathways towards sustainability. Developed in collaboration with the “Institut de l’énergie Trottier” (IET), this tool leverages a technoeconomic optimization model to project the energy transformation needed under different constraints and assumptions. We detail the design process that guided the development of the Pathways Explorer, focusing on user-centered design challenges and requirements. A case study is presented to demonstrate how the tool has been utilized by stakeholders to make informed decisions, highlighting its impact and effectiveness. The Pathways Explorer not only enhances understanding of complex climate data but also supports strategic planning by providing clear, comparative visualizations of potential future scenarios.

\ No newline at end of file diff --git a/program/paper_w-energyvis-2845.html b/program/paper_w-energyvis-2845.html new file mode 100644 index 000000000..99169e5bf --- /dev/null +++ b/program/paper_w-energyvis-2845.html @@ -0,0 +1,127 @@ + IEEE VIS 2024 Content: Challenges in Data Integration, Monitoring, and Exploration of Methane Emissions: The Role of Data Analysis and Visualization

Challenges in Data Integration, Monitoring, and Exploration of Methane Emissions: The Role of Data Analysis and Visualization

Parisa Masnadi Khiabani - University of Oklahoma, Norman, United States

Gopichandh Danala - University of Oklahoma, Norman, United States

Wolfgang Jentner - University of Oklahoma, Norman, United States

David Ebert - University of Oklahoma, Oklahoma, United States

Room: To Be Announced

Abstract

Methane (CH4) leakage monitoring is crucial for environmental protection and regulatory compliance, particularly in the oil and gas industries. Capturing CH4 rather than releasing it also advances green energy, since innovative capture technologies convert it into a valuable energy source. A real-time continuous monitoring system (CMS) is necessary to detect fugitive and intermittent emissions and provide actionable insights. Integrating spatiotemporal data from satellites, airborne sensors, and ground sensors with inventory data and the weather research and forecasting (WRF) model creates a comprehensive dataset, making CMS feasible but posing significant challenges. These challenges include data alignment and fusion, managing heterogeneity, handling missing values, ensuring resolution integrity, and maintaining geometric and radiometric accuracy. This study outlines the procedure for methane leakage detection, addressing challenges at each step and offering solutions through machine learning and data analysis. It further details how visual analytics can be implemented to improve the effectiveness of the various aspects of emission monitoring.
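
As one concrete illustration of the alignment challenge, nearest-in-time fusion of irregular ground-sensor readings with satellite retrievals might look like the following pandas sketch; the file names and column schema are invented for illustration:

    import pandas as pd

    # Align irregular ground-sensor readings with satellite retrievals per site
    ground = pd.read_csv("ground_ch4.csv", parse_dates=["time"]).sort_values("time")
    sat = pd.read_csv("satellite_ch4.csv", parse_dates=["time"]).sort_values("time")

    # Nearest-in-time join with a tolerance; pairs farther apart than 30 minutes
    # are left unmatched rather than silently fused.
    fused = pd.merge_asof(ground, sat, on="time", by="site_id",
                          tolerance=pd.Timedelta("30min"), direction="nearest")

    # Gaps beyond the tolerance stay NaN and move on to the missing-value pipeline
    print(fused["ch4_ppb_sat"].isna().mean())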

\ No newline at end of file diff --git a/program/paper_w-energyvis-3496.html b/program/paper_w-energyvis-3496.html new file mode 100644 index 000000000..6bcbed395 --- /dev/null +++ b/program/paper_w-energyvis-3496.html @@ -0,0 +1,127 @@ + IEEE VIS 2024 Content: Operator-Centered Design of a Nodal Loadability Network Visualization

Operator-Centered Design of a Nodal Loadability Network Visualization

David Marino - Hitachi Energy Research, Montreal, Canada

Maxwell Keleher - Carleton University, Ottawa, Canada

Krzysztof Chmielowiec - Hitachi Energy Research, Krakow, Poland

Antony Hilliard - Hitachi Energy Research, Montreal, Canada

Pawel Dawidowski - Hitachi Energy Research, Krakow, Poland

Room: To Be Announced

Abstract

Transmission System Operators (TSOs) often need to integrate multiple sources of information to make decisions in real time. In cases where a single power line goes offline, due to a natural event or scheduled outage, there is typically a contingency plan that the TSO can use to mitigate the situation. In cases where two or more power lines go offline, this contingency plan is no longer valid, and the TSO must re-prepare and reason about the network in real time. A key network property that must be balanced is loadability--the range of permissible voltage levels for a specific bus (or node), understood as a function of power and its active (P) and reactive (Q) components. Loadability indicates how much more demand a specific node can handle before the system becomes unstable. To increase loadability, the TSO can take control actions that raise or lower P or Q, bringing the required voltage levels back within permissible limits. While many methods exist to calculate loadability and represent it to end users, little attention has been paid to tailoring loadability visualizations to the unique needs of TSOs. In this paper we involve operations domain experts in a human-centered design process to prototype two new loadability visualizations for TSOs. We contribute a design paper that yields: (1) a working model of the operator's decision-making process, (2) example artifacts of the two data visualization techniques, and (3) a critical qualitative expert review of our designs.
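
The loadability concept can be made concrete with the textbook two-bus system; the following sketch (not from the paper, all values illustrative) solves the receiving-end voltage for a lossless line and shows where the real solution, and hence any permissible voltage level, disappears:

    import math

    def load_bus_voltage(P, Q, V1=1.0, X=0.1):
        # High-voltage solution of the two-bus power-flow equation (per unit):
        #   V2^4 + (2*Q*X - V1^2) * V2^2 + X^2 * (P^2 + Q^2) = 0
        # which is quadratic in V2^2. No real root means voltage collapse.
        b = 2.0 * Q * X - V1 ** 2
        c = X ** 2 * (P ** 2 + Q ** 2)
        disc = b ** 2 - 4.0 * c
        if disc < 0:                       # demand exceeds loadability
            return None
        v2_sq = (-b + math.sqrt(disc)) / 2.0   # stable (upper) branch
        return math.sqrt(v2_sq)

    # Sweep active power at a fixed power factor to locate the node's limit
    for P in [1.0, 3.0, 5.0, 7.0]:
        V2 = load_bus_voltage(P, Q=0.2 * P)
        label = "collapse" if V2 is None else f"{V2:.3f} pu"
        print(f"P={P:.1f} pu -> V2={label}")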

\ No newline at end of file diff --git a/program/paper_w-energyvis-4332.html b/program/paper_w-energyvis-4332.html new file mode 100644 index 000000000..3b7c509e1 --- /dev/null +++ b/program/paper_w-energyvis-4332.html @@ -0,0 +1,127 @@ + IEEE VIS 2024 Content: Situated Visualization of Photovoltaic Module Performance for Workforce Development

Situated Visualization of Photovoltaic Module Performance for Workforce Development

Nicholas Brunhart-Lupo - National Renewable Energy Laboratory, Golden, United States

Kenny Gruchalla - National Renewable Energy Lab, Golden, United States

Laurie Williams - Fort Lewis College, Durango, United States

Steve Ellis - Fort Lewis College, Durango, United States

Room: To Be Announced

Abstract

The rapid growth of the solar energy industry requires advanced educational tools to train the next generation of engineers and technicians. We present a novel system for situated visualization of photovoltaic (PV) module performance, leveraging a combination of PV simulation, sun-sky position, and head-mounted augmented reality (AR). Our system is guided by four principles of development: simplicity, adaptability, collaboration, and maintainability, realized in six components. Users interactively manipulate a physical module's orientation and shading referents with immediate feedback on the module's performance.
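
A minimal sketch of the kind of performance calculation such a system could drive, using a simple cosine-of-incidence model; the function names, module parameters, and geometry convention are assumptions for illustration, not the system's PV simulation:

    import math

    def incidence_cosine(tilt, azimuth, sun_elev, sun_az):
        # Cosine of the angle between the module normal and the sun vector
        # (all angles in degrees; azimuths measured clockwise from north).
        t, a = math.radians(tilt), math.radians(azimuth)
        e, s = math.radians(sun_elev), math.radians(sun_az)
        normal = (math.sin(t) * math.sin(a), math.sin(t) * math.cos(a), math.cos(t))
        sun = (math.cos(e) * math.sin(s), math.cos(e) * math.cos(s), math.sin(e))
        return max(0.0, sum(n * v for n, v in zip(normal, sun)))

    def module_power(dni, tilt, azimuth, sun_elev, sun_az,
                     area_m2=1.6, efficiency=0.20, shading=0.0):
        # Toy DC output: direct irradiance * cos(incidence) * area * efficiency,
        # derated by a shading fraction (the referent the user manipulates).
        poa = dni * incidence_cosine(tilt, azimuth, sun_elev, sun_az)
        return poa * area_m2 * efficiency * (1.0 - shading)

    # e.g. sun 60 deg high due south, module tilted 30 deg facing south -> 272 W
    print(module_power(dni=850, tilt=30, azimuth=180, sun_elev=60, sun_az=180))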

\ No newline at end of file diff --git a/program/paper_w-energyvis-6102.html b/program/paper_w-energyvis-6102.html new file mode 100644 index 000000000..fa30de4fd --- /dev/null +++ b/program/paper_w-energyvis-6102.html @@ -0,0 +1,127 @@ + IEEE VIS 2024 Content: CPIE: A Spatiotemporal Visual Analytic Tool to Explore the Impact of Coal Pollution

CPIE: A Spatiotemporal Visual Analytic Tool to Explore the Impact of Coal Pollution

Sichen Jin - Georgia Institute of Technology, Atlanta, United States

Lucas Henneman - George Mason University, Fairfax, United States

Jessica Roberts - Georgia Institute of Technology, Atlanta, United States

Room: To Be Announced

Abstract

This paper introduces CPIE (Coal Pollution Impact Explorer), a spatiotemporal visual analytic tool developed for interactive visualization of coal pollution impacts. CPIE visualizes electricity-generating units (EGUs) and their contributions to statewide Medicare deaths related to coal PM2.5 emissions. The tool is designed to make scientific findings on the impacts of coal pollution more accessible to the general public and to raise awareness of the associated health risks. We present three use cases for CPIE: 1) the overall spatial distribution of all 480 facilities in the United States, their statewide impact on excess deaths, and the overall decreasing trend in deaths associated with coal pollution from 1999 to 2020; 2) the influence of pollution transport, where most deaths are associated with facilities located within the same state or in neighboring states, though some deaths occur far away; and 3) the effectiveness of intervention regulations, such as installing emissions control devices and shutting down coal facilities, in significantly reducing the number of deaths associated with coal pollution.

\ No newline at end of file diff --git a/program/paper_w-energyvis-9750.html b/program/paper_w-energyvis-9750.html new file mode 100644 index 000000000..a838322d3 --- /dev/null +++ b/program/paper_w-energyvis-9750.html @@ -0,0 +1,127 @@ + IEEE VIS 2024 Content: ChatGrid: Power Grid Visualization Empowered by a Large Language Model

ChatGrid: Power Grid Visualization Empowered by a Large Language Model

Sichen Jin - Georgia Institute of Technology, Atlanta, United States

Shrirang Abhyankar - Pacific Northwest National Laboratory, Richland, United States

Room: To Be Announced

Abstract

This paper presents ChatGrid, a novel open system for easy, intuitive, and interactive geospatial visualization of large-scale transmission networks. ChatGrid uses state-of-the-art techniques for geospatial visualization of large networks, including 2.5D map views, animated flows, and hierarchical, level-based filtering and aggregation, to present visual information in an easily digestible manner. The highlight of ChatGrid is a natural-language query interface powered by a large language model (ChatGPT) that offers a natural and flexible interactive experience: users ask questions and ChatGrid responds both in text and visually. This paper discusses the architecture, implementation, design decisions, and usage of large language models for ChatGrid.
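
The query path can be pictured as natural-language text in, a machine-readable view specification out. A hedged sketch follows; call_llm is a hypothetical chat-completion client and the specification keys are invented for illustration, not ChatGrid's actual schema:

    import json

    SYSTEM_PROMPT = """You translate questions about a transmission network into a
    JSON view specification with keys: "filter" (e.g. {"voltage_kv": ">=345"}),
    "aggregate" ("none" | "zone" | "state"), and "highlight" (attribute name).
    Answer with JSON only."""

    def nl_to_view_spec(question, call_llm):
        # call_llm(system, user) -> str is a placeholder for any chat API;
        # ChatGrid itself uses ChatGPT for this step.
        raw = call_llm(SYSTEM_PROMPT, question)
        spec = json.loads(raw)                 # fail loudly on malformed output
        assert spec.get("aggregate") in {"none", "zone", "state"}
        return spec

    # The renderer then applies the spec, e.g.:
    # spec = nl_to_view_spec("Show overloaded 500 kV lines in the west", client)
    # view.filter(**spec["filter"]); view.highlight(spec["highlight"])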

\ No newline at end of file diff --git a/program/paper_w-future-1007.html b/program/paper_w-future-1007.html new file mode 100644 index 000000000..54902ab93 --- /dev/null +++ b/program/paper_w-future-1007.html @@ -0,0 +1,127 @@ + IEEE VIS 2024 Content: Rain Gauge: Exploring the Design and Sustainability of 3D Printed Clay Physicalizations

Rain Gauge: Exploring the Design and Sustainability of 3D Printed Clay Physicalizations

Bridger Herman - University of Minnesota, Minneapolis, United States

Jessica Rossi-Mastracci - University of Minnesota, Minneapolis, United States

Heather Willy - University of Minnesota, Minneapolis, United States

Molly Reichert - University of Minnesota, Minneapolis, United States

Daniel F. Keefe - University of Minnesota, Minneapolis, United States

Room: To Be Announced

Abstract

Data physicalizations are a time-tested practice for visualizing data, but the sustainability challenges of current physicalization practices have only recently been explored; one example is the reliance on carbon-intensive, non-renewable materials like plastic and metal. This work explores clay physicalizations as an approach to these challenges. Using a three-stage process, we investigate the design and sustainability of clay 3D printed physicalizations: 1) exploring the properties and constraints of clay when extruded through a 3D printer, 2) testing a variety of data encodings that work within the constraints, and 3) introducing Rain Gauge, a clay physicalization that explores climate data through an impermanent material. Throughout our process, we investigate the material circularity of clay-based digital fabrication by reclaiming and reusing the clay stock in each stage. Finally, we reflect on the implications of ceramic 3D printing for data physicalization through the lenses of practicality and sustainability.

\ No newline at end of file diff --git a/program/paper_w-future-1008.html b/program/paper_w-future-1008.html new file mode 100644 index 000000000..fe5af677f --- /dev/null +++ b/program/paper_w-future-1008.html @@ -0,0 +1,127 @@ + IEEE VIS 2024 Content: (Almost) All Data is Absent Data

(Almost) All Data is Absent Data

Karly Ross - University of Calgary, Calgary, Canada

Pratim Sengupta - University of Calgary, Calgary, Canada

Wesley Willett - University of Calgary, Calgary, Canada

Room: To Be Announced

Abstract

We explain our model of data-in-a-void and contrast it with the idea of data-voids to explore how the different framings impact our thinking on sustainability. This contrast supports our assertion that how we think about the data we work with for visualization design shapes the direction of our thinking and our work. We provide two examples: one that relates to existing data about bicycle mobility, and one about non-data for local food production. In the discussion, we untangle and outline how our thinking about data for sustainability is influenced by the data-in-a-void model.

\ No newline at end of file diff --git a/program/paper_w-future-1011.html b/program/paper_w-future-1011.html new file mode 100644 index 000000000..8db6dc79a --- /dev/null +++ b/program/paper_w-future-1011.html @@ -0,0 +1,127 @@ + IEEE VIS 2024 Content: Renewable Energy Data Visualization: A study with Open Data

Renewable Energy Data Visualization: A study with Open Data

Gustavo Santos Silva - Faculdade Nova Roma, Recife, Brazil

Artur Vinícius Lima Silva - Faculdade Nova Roma, Recife, Brazil

Lucas Pereira Souza - Faculdade Nova Roma, Recife, Brazil

Adrian Lauzid - Faculdade Nova Roma, Recife, Brazil

Davi Maia - Universidade Federal de Pernambuco, Recife, Brazil

Room: To Be Announced

Abstract

This study explores energy issues across various nations, focusing on sustainable energy availability and accessibility. Representatives from all continents were selected based on their HDI values. Data from Kaggle, spanning 2000-2020, was analyzed using Python to address questions on electricity access, renewable energy generation, and fossil fuel consumption. The research employed statistical and data visualization techniques to reveal trends and disparities. Findings underscore the importance of Python and Kaggle in data analysis. The study suggests expanding datasets and incorporating predictive modeling for future research to enhance understanding and decision-making in energy policies.
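
A sketch of the style of analysis described, with pandas; the file name and column names are assumptions modeled on common open energy datasets, not the study's exact schema:

    import pandas as pd

    df = pd.read_csv("global_energy_2000_2020.csv")

    # Share of renewables in total generation, per country and year
    df["renewable_share"] = df["renewables_twh"] / df["electricity_generation_twh"]

    # Trend for HDI-selected representative countries (names illustrative)
    trend = (df[df["country"].isin(["Brazil", "Germany", "Nigeria", "India"])]
             .pivot_table(index="year", columns="country", values="renewable_share"))
    print(trend.loc[[2000, 2010, 2020]].round(3))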

\ No newline at end of file diff --git a/program/paper_w-future-1012.html b/program/paper_w-future-1012.html new file mode 100644 index 000000000..300708910 --- /dev/null +++ b/program/paper_w-future-1012.html @@ -0,0 +1,127 @@ + IEEE VIS 2024 Content: Reimagining Data Visualization to Address Sustainability Goals

Reimagining Data Visualization to Address Sustainability Goals

Narges Mahyar - University of Massachusetts Amherst, Amherst, United States

Room: To Be Announced

Abstract

Information visualization holds significant potential to support sustainability goals such as environmental stewardship and climate resilience by transforming complex data into accessible visual formats that enhance public understanding of complex climate change data and drive actionable insights. While the field has predominantly focused on the analytical orientation of visualization, ``critical visualization'' research challenges traditional visualization techniques and goals and expands the field's existing assumptions and conventions. In this paper, I explore how reimagining overlooked aspects of data visualization—such as engagement, emotional resonance, communication, and community empowerment—can contribute to achieving sustainability objectives. I argue that by focusing on inclusive data visualization that promotes clarity, understandability, and public participation, we can make complex data more relatable and actionable, fostering broader connections and mobilizing collective action on critical issues like climate change. Moreover, I discuss the role of emotional receptivity in environmental data communication, stressing the need for visualizations that respect diverse cultural perspectives and emotional responses to achieve impactful outcomes. Drawing on insights from a decade of research in public participation and community engagement, I aim to highlight how data visualization can democratize data access and increase public involvement in order to contribute to a more sustainable and resilient future.

\ No newline at end of file diff --git a/program/paper_w-future-1013.html b/program/paper_w-future-1013.html new file mode 100644 index 000000000..c02354bee --- /dev/null +++ b/program/paper_w-future-1013.html @@ -0,0 +1,127 @@ + IEEE VIS 2024 Content: Visual and Data Journalism as Tools for Fighting Climate Change

Visual and Data Journalism as Tools for Fighting Climate Change

Emilly Brito - Universidade Federal de Pernambuco, Recife, Brazil

Nivan Ferreira - Universidade Federal de Pernambuco, Recife, Brazil

Room: To Be Announced

Abstract

This position paper discusses the role of data visualizations in journalism based on new areas of study such as visual journalism and data journalism, using examples from the coverage of the catastrophe that occurred in 2024 in Rio Grande do Sul, Brazil, affecting over 2 million people. This case served as a warning to the country about the importance of the climate change agenda and its consequences. The paper includes a literature review in the fields of journalism, data visualization, and psychology to explore the importance of data visualization in combating misinformation and in producing more reliable journalism as a tool for fighting climate change.

\ No newline at end of file diff --git a/program/paper_w-nlviz-1004.html b/program/paper_w-nlviz-1004.html new file mode 100644 index 000000000..73cb52e3d --- /dev/null +++ b/program/paper_w-nlviz-1004.html @@ -0,0 +1,127 @@ + IEEE VIS 2024 Content: Steering LLM Summarization with Visual Workspaces for Sensemaking

Steering LLM Summarization with Visual Workspaces for Sensemaking

Xuxin Tang - Computer Science Department, Blacksburg, United States

Eric Krokos - DoD, Laurel, United States

Kirsten Whitley - Department of Defense, College Park, United States

Can Liu - City University of Hong Kong, Hong Kong, China

Naren Ramakrishnan - Virginia Tech, Blacksburg, United States

Chris North - Virginia Tech, Blacksburg, United States

Room: To Be Announced

Abstract

Large Language Models (LLMs) have been widely applied in summarization due to their speedy and high-quality text generation. Summarization for sensemaking involves information compression and insight extraction. Human guidance in sensemaking tasks can prioritize and cluster relevant information for LLMs. However, users must translate their cognitive thinking into natural language to communicate with LLMs. Can more readable and operable visual representations guide the summarization process for sensemaking? To explore this question, we propose introducing an intermediate step--a schematic visual workspace for human sensemaking--before the LLM generation, to steer and refine the summarization process. We conduct a series of proof-of-concept experiments to investigate the potential for enhancing the summarization by GPT-4 through visual workspaces. Leveraging a textual sensemaking dataset with a ground-truth summary, we evaluate the impact of a human-generated visual workspace on LLM-generated summarization of the dataset and assess the effectiveness of space-steered summarization. We categorize several types of extractable information from typical human workspaces that can be injected into engineered prompts to steer the LLM summarization. The results demonstrate how such workspaces can help align an LLM with the ground truth, leading to more accurate summarization results than without the workspaces.
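
One of the extractable signals the abstract mentions, cluster structure with user-assigned labels, might be injected into a prompt roughly as follows; this is a sketch, and the prompt wording is illustrative rather than the paper's engineered prompt:

    def workspace_to_prompt(clusters, query):
        # Serialize a human-built visual workspace into steering text.
        # `clusters` maps a user-chosen label to the document snippets grouped
        # under it, in the priority order the analyst arranged them.
        lines = ["Summarize the documents. Give priority to how the analyst",
                 "organized them; clusters are listed most-important first.", ""]
        for label, docs in clusters.items():
            lines.append(f"Cluster '{label}':")
            lines.extend(f"  - {d}" for d in docs)
        lines.append(f"\nAnalyst's question: {query}")
        return "\n".join(lines)

    prompt = workspace_to_prompt(
        {"suspicious meetings": ["doc14: ...", "doc3: ..."],
         "background": ["doc7: ..."]},
        "Who coordinated the shipment?")
    print(prompt)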

\ No newline at end of file diff --git a/program/paper_w-nlviz-1007.html b/program/paper_w-nlviz-1007.html new file mode 100644 index 000000000..a5616fc14 --- /dev/null +++ b/program/paper_w-nlviz-1007.html @@ -0,0 +1,127 @@ + IEEE VIS 2024 Content: Towards Real-Time Speech Segmentation for Glanceable Conversation Visualization

Towards Real-Time Speech Segmentation for Glanceable Conversation Visualization

Shanna Li Ching Hollingworth - University of Calgary, Calgary, Canada

Wesley Willett - University of Calgary, Calgary, Canada

Room: To Be Announced

Abstract

We explore the use of segmentation and summarization methods for the generation of real-time conversation topic timelines, in the context of glanceable Augmented Reality (AR) visualization. Conversation timelines may serve to summarize and contextualize conversations as they are happening, helping to keep conversations on track. Because dialogue and conversations are broad and unpredictable by nature, and our processing is being done in real-time, not all relevant information may be present in the text at the time it is processed. Thus, we present considerations and challenges which may not be as prevalent in traditional implementations of topic classification and dialogue segmentation. Furthermore, we discuss how AR visualization requirements and design practices require an additional layer of decision making, which must be factored directly into the text processing algorithms. We explore three segmentation strategies -- using dialogue segmentation based on the text of the entire conversation, segmenting on 1-minute intervals, and segmenting on 10-second intervals -- and discuss our results.
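
The two fixed-interval strategies reduce to simple windowing over timestamped utterances; a sketch follows, assuming (timestamp, text) pairs and leaving the per-segment topic labeling step abstract:

    def segment_by_interval(utterances, window_s=60.0):
        # Group (timestamp_seconds, text) pairs into fixed windows -- the
        # 1-minute strategy; pass window_s=10.0 for the 10-second variant.
        # A topic labeler (keyword-based or LLM) would then run per segment.
        segments = []
        for ts, text in utterances:
            idx = int(ts // window_s)
            while len(segments) <= idx:
                segments.append([])
            segments[idx].append(text)
        return [" ".join(s) for s in segments if s]

    live_transcript = [(2.1, "so about the budget"), (12.8, "we cut travel"),
                       (61.0, "next, hiring plans")]
    print(segment_by_interval(live_transcript, window_s=60.0))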

\ No newline at end of file diff --git a/program/paper_w-nlviz-1008.html b/program/paper_w-nlviz-1008.html new file mode 100644 index 000000000..a0628bcf0 --- /dev/null +++ b/program/paper_w-nlviz-1008.html @@ -0,0 +1,127 @@ + IEEE VIS 2024 Content: vitaLITy 2: Reviewing Academic Literature Using Large Language Models

vitaLITy 2: Reviewing Academic Literature Using Large Language Models

Hongye An - University of Nottingham, Nottingham, United Kingdom

Arpit Narechania - Georgia Institute of Technology, Atlanta, United States

Kai Xu - University of Nottingham, Nottingham, United Kingdom

Room: To Be Announced

Abstract

Academic literature reviews have traditionally relied on techniques such as keyword searches and the accumulation of relevant back-references, using databases like Google Scholar or IEEE Xplore. However, both the precision and accuracy of these search techniques are limited by the presence or absence of specific keywords, making literature review akin to searching for needles in a haystack. We present vitaLITy 2, a solution that uses a Large Language Model (LLM)-based approach to identify semantically relevant literature in a textual embedding space. We include a corpus of 66,692 papers from 1970-2023 that are searchable through text embeddings created by three language models. vitaLITy 2 contributes a novel Retrieval Augmented Generation (RAG) architecture and can be interacted with through an LLM with augmented prompts, including summarization of a collection of papers. vitaLITy 2 also provides a chat interface that allows users to perform complex queries without learning any new programming language. This also enables users to take advantage of the knowledge captured in the LLM from its enormous training corpus. Finally, we demonstrate the applicability of vitaLITy 2 through two usage scenarios.
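
The retrieval half of such a RAG loop reduces to nearest-neighbor search in the embedding space; a minimal NumPy sketch, assuming the paper embeddings have already been computed by one of the language models:

    import numpy as np

    def top_k_papers(query_vec, paper_vecs, titles, k=5):
        # Cosine-similarity retrieval over precomputed text embeddings:
        # normalize, dot, and take the k best-scoring papers.
        q = query_vec / np.linalg.norm(query_vec)
        P = paper_vecs / np.linalg.norm(paper_vecs, axis=1, keepdims=True)
        scores = P @ q
        best = np.argsort(-scores)[:k]
        return [(titles[i], float(scores[i])) for i in best]

    # The retrieved abstracts are then pasted into the LLM prompt (the
    # "augmented generation" step): answer the query using only these k papers.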

\ No newline at end of file diff --git a/program/paper_w-nlviz-1009.html b/program/paper_w-nlviz-1009.html new file mode 100644 index 000000000..e57e224d4 --- /dev/null +++ b/program/paper_w-nlviz-1009.html @@ -0,0 +1,127 @@ + IEEE VIS 2024 Content: “Show Me What’s Wrong!”: Combining Charts and Text to Guide Data Analysis

“Show Me What’s Wrong!”: Combining Charts and Text to Guide Data Analysis

Beatriz Feliciano - Feedzai, Lisbon, Portugal

Rita Costa - Feedzai, Lisbon, Portugal

Jean Alves - Feedzai, Porto, Portugal

Javier Liébana - Feedzai, Madrid, Spain

Diogo Ramalho Duarte - Feedzai, Lisbon, Portugal

Pedro Bizarro - Feedzai, Lisbon, Portugal

Room: To Be Announced

Abstract

Analyzing and finding anomalies in multi-dimensional datasets is a cumbersome but vital task across different domains. In the context of financial fraud detection, analysts must quickly identify suspicious activity among transactional data. This is an iterative process made of complex exploratory tasks such as recognizing patterns, grouping, and comparing. To mitigate the information overload inherent to these steps, we present a tool combining automated information highlights, Large Language Model generated textual insights, and visual analytics, facilitating exploration at different levels of detail. We perform a segmentation of the data per analysis area and visually represent each one, making use of automated visual cues to signal which require more attention. Upon user selection of an area, our system provides textual and graphical summaries. The text, acting as a link between the high-level and detailed views of the chosen segment, allows for a quick understanding of relevant details. A thorough exploration of the data comprising the selection can be done through graphical representations. The feedback gathered in a study performed with seven domain experts suggests our tool effectively supports and guides exploratory analysis, easing the identification of suspicious information.

\ No newline at end of file diff --git a/program/paper_w-nlviz-1010.html b/program/paper_w-nlviz-1010.html new file mode 100644 index 000000000..0df9a5ad6 --- /dev/null +++ b/program/paper_w-nlviz-1010.html @@ -0,0 +1,127 @@ + IEEE VIS 2024 Content: Visualizing Spatial Semantics of Dimensionally Reduced Text Embeddings

Visualizing Spatial Semantics of Dimensionally Reduced Text Embeddings

Wei Liu - Computer Science, Virginia Tech, Blacksburg, United States

Chris North - Virginia Tech, Blacksburg, United States

Rebecca Faust - Tulane University, New Orleans, United States

Room: To Be Announced

Abstract

Dimension reduction (DR) can transform high-dimensional text embeddings into a 2D visual projection facilitating the exploration of document similarities. However, the projection often lacks connection to the text semantics, due to the opaque nature of text embeddings and non-linear dimension reductions. To address these problems, we propose a gradient-based method for visualizing the spatial semantics of dimensionally reduced text embeddings. This method employs gradients to assess the sensitivity of the projected documents with respect to the underlying words. The method can be applied to existing DR algorithms and text embedding models. Using these gradients, we designed a visualization system that incorporates spatial word clouds into the document projection space to illustrate the impactful text features. We further present three usage scenarios that demonstrate the practical applications of our system to facilitate the discovery and interpretation of underlying semantics in text projections.
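
The intuition behind the gradients can be approximated without the paper's analytic machinery by a finite-difference probe; a sketch, assuming doc_vec and word_vec come from the same embedding model and project is any fitted DR map applied here to a single vector:

    import numpy as np

    def word_sensitivity(doc_vec, word_vec, project, eps=1e-3):
        # Finite-difference stand-in for the paper's gradient computation:
        # how much does the document's 2D position move when its embedding is
        # nudged along the word's embedding direction?
        d = word_vec / np.linalg.norm(word_vec)
        p0 = project(doc_vec)
        p1 = project(doc_vec + eps * d)
        return (p1 - p0) / eps   # 2-vector: direction and strength of influence

    # Words with the largest sensitivity norms become candidates for the
    # spatial word-cloud labels placed around that document in the projection.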

\ No newline at end of file diff --git a/program/paper_w-nlviz-1011.html b/program/paper_w-nlviz-1011.html new file mode 100644 index 000000000..c236b16f4 --- /dev/null +++ b/program/paper_w-nlviz-1011.html @@ -0,0 +1,127 @@ + IEEE VIS 2024 Content: Generating Analytic Specifications for Data Visualization from Natural Language Queries using Large Language Models

Generating Analytic Specifications for Data Visualization from Natural Language Queries using Large Language Models

Subham Sah - UNC Charlotte, Charlotte, United States

Rishab Mitra - Georgia Institute of Technology, Atlanta, United States

Arpit Narechania - Georgia Institute of Technology, Atlanta, United States

Alex Endert - Georgia Institute of Technology, Atlanta, United States

John Stasko - Georgia Institute of Technology, Atlanta, United States

Wenwen Dou - UNC Charlotte, Charlotte, United States

Room: To Be Announced

Abstract

Recently, large language models (LLMs) have shown great promise in translating natural language (NL) queries into visualizations, but their “black-box” nature often limits explainability and debuggability. In response, we present a comprehensive text prompt that, given a tabular dataset and an NL query about the dataset, generates an analytic specification including (detected) data attributes, (inferred) analytic tasks, and (recommended) visualizations. This specification captures key aspects of the query translation process, affording both explainability and debuggability. For instance, it provides mappings from the detected entities to the corresponding phrases in the input query, as well as the specific visual design principles that determined the visualization recommendations. Moreover, unlike prior LLM-based approaches, our prompt supports conversational interaction and ambiguity detection capabilities. In this paper, we detail the iterative process of curating our prompt, present a preliminary performance evaluation using GPT-4, and discuss the strengths and limitations of LLMs at various stages of query translation.
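
For concreteness, an analytic specification of the kind described might look like the following; this is a hypothetical example, with field names and keys invented rather than taken from the paper's exact schema:

    # Hypothetical output for the query
    # "how did sales change over time for the two regions?"
    spec = {
        "data_attributes": [
            {"name": "sales", "type": "quantitative", "phrase": "sales"},
            {"name": "order_date", "type": "temporal", "phrase": "over time"},
            {"name": "region", "type": "nominal", "phrase": "the two regions"},
        ],
        "analytic_tasks": ["trend", "comparison"],
        "visualizations": [{
            "mark": "line", "x": "order_date", "y": "sales", "color": "region",
            "rationale": "temporal trend comparison favors a multi-series line chart",
        }],
        "ambiguities": [{"phrase": "the two regions", "question": "Which two?"}],
    }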

\ No newline at end of file diff --git a/program/paper_w-nlviz-1016.html b/program/paper_w-nlviz-1016.html new file mode 100644 index 000000000..6316dfe7b --- /dev/null +++ b/program/paper_w-nlviz-1016.html @@ -0,0 +1,127 @@ + IEEE VIS 2024 Content: Towards Inline Natural Language Authoring for Word-Scale Visualizations

Towards Inline Natural Language Authoring for Word-Scale Visualizations

Paige So'Brien - University of Calgary, Calgary, Canada

Wesley Willett - University of Calgary, Calgary, Canada

Room: To Be Announced

Abstract

We explore how natural language authoring with large language models (LLMs) can support the inline authoring of word-scale visualizations (WSVs). While word-scale visualizations that live alongside and within document text can support rich integration of data into written narratives and communication, these small visualizations have typically been challenging to author. We explore how modern LLMs---which are able to generate diverse visualization designs based on simple natural language descriptions---might allow authors to specify and insert new visualizations inline as they write text. Drawing on our experiences with an initial prototype built using GPT-4, we highlight the expressive potential of inline natural language visualization authoring and identify opportunities for further research.

\ No newline at end of file diff --git a/program/paper_w-nlviz-1019.html b/program/paper_w-nlviz-1019.html new file mode 100644 index 000000000..288c329e6 --- /dev/null +++ b/program/paper_w-nlviz-1019.html @@ -0,0 +1,127 @@ + IEEE VIS 2024 Content: iToT: An Interactive System for Customized Tree-of-Thought Generation

iToT: An Interactive System for Customized Tree-of-Thought Generation

Alan David Boyle - ETHZ, Zurich, Switzerland

Isha Gupta - ETH Zürich, Zürich, Switzerland

Sebastian Hönig - ETH Zürich, Zürich, Switzerland

Lukas Mautner - ETH Zürich, Zürich, Switzerland

Kenza Amara - ETH Zürich, Zürich, Switzerland

Furui Cheng - ETH Zürich, Zürich, Switzerland

Mennatallah El-Assady - ETH Zürich, Zürich, Switzerland

Room: To Be Announced

Abstract

As language models have become increasingly successful at a wide array of tasks, different prompt engineering methods have been developed alongside them in order to adapt these models to new tasks. One of them is Tree-of-Thoughts (ToT), a prompting strategy and framework for language model inference and problem-solving. It allows the model to explore multiple solution paths and select the best course of action, producing a tree-like structure of intermediate steps (i.e., thoughts). This method was shown to be effective for several problem types. However, the official implementation has a high barrier to usage as it requires setup overhead and incorporates task-specific problem templates which are difficult to generalize to new problem types. It also does not allow user interaction to improve or suggest new thoughts. We introduce iToT (interactive Tree-of-Thoughts), a generalized and interactive Tree-of-Thoughts prompting system. iToT allows users to explore each step of the model’s problem-solving process as well as to correct and extend the model’s thoughts. iToT revolves around a visual interface that facilitates simple and generic ToT usage and makes the problem-solving process transparent to users. This facilitates a better understanding of which thoughts and considerations lead to the model’s final decision. Through two case studies, we demonstrate the usefulness of iToT in different human-LLM co-writing tasks.
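
Stripped of the interface, ToT-style inference is a beam search over LLM-proposed thoughts; a skeleton sketch follows, with propose and score standing in for the model calls (all names illustrative, not iToT's API):

    def tree_of_thoughts(problem, propose, score, width=3, depth=3):
        # propose(path) -> list of candidate next thoughts (an LLM call)
        # score(path)   -> numeric rating of a partial solution (an LLM call)
        beam = [[problem]]                   # each path is a list of thoughts
        for _ in range(depth):
            candidates = [path + [t] for path in beam for t in propose(path)]
            beam = sorted(candidates, key=score, reverse=True)[:width]
            # an interactive system like iToT would pause here so the user
            # can edit, delete, or append thoughts before the next expansion
        return max(beam, key=score)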

\ No newline at end of file diff --git a/program/paper_w-nlviz-1020.html b/program/paper_w-nlviz-1020.html new file mode 100644 index 000000000..d7ca8409a --- /dev/null +++ b/program/paper_w-nlviz-1020.html @@ -0,0 +1,127 @@ + IEEE VIS 2024 Content: Strategic management analysis: from data to strategy diagram by LLM

Strategic management analysis: from data to strategy diagram by LLM

Richard Brath - Uncharted Software, Toronto, Canada

Adam James Bradley - Uncharted Software, Toronto, Canada

David Jonker - Uncharted Software, Toronto, Canada

Room: To Be Announced

Abstract

Strategy management analyses are created by business consultants with common analysis frameworks (e.g., comparative analyses) and associated diagrams. We show these can be largely constructed using LLMs, starting with the extraction of insights from data, organization of those insights according to a strategy management framework, and then depiction in the typical strategy management diagram for that framework (static textual visualizations). We discuss caveats and future directions to generalize for broader uses.

\ No newline at end of file diff --git a/program/paper_w-nlviz-1021.html b/program/paper_w-nlviz-1021.html new file mode 100644 index 000000000..14c069051 --- /dev/null +++ b/program/paper_w-nlviz-1021.html @@ -0,0 +1,127 @@ + IEEE VIS 2024 Content: A Preliminary Roadmap for LLMs as Visual Data Analysis Assistants

A Preliminary Roadmap for LLMs as Visual Data Analysis Assistants

Harry Li - MIT Lincoln Laboratory, Lexington, United States

Gabriel Appleby - Tufts University, Medford, United States

Ashley Suh - MIT Lincoln Laboratory, Lexington, United States

Room: To Be Announced

Abstract

We present a mixed-methods study to explore how large language models (LLMs) can assist users in the visual exploration and analysis of complex data structures, using knowledge graphs (KGs) as a baseline. We surveyed and interviewed 20 professionals who regularly work with LLMs with the goal of using them for (or alongside) KGs. From the analysis of our interviews, we contribute a preliminary roadmap for the design of LLM-driven visual analysis systems and outline future opportunities in this emergent design space.

\ No newline at end of file diff --git a/program/paper_w-nlviz-1022.html b/program/paper_w-nlviz-1022.html new file mode 100644 index 000000000..974235b08 --- /dev/null +++ b/program/paper_w-nlviz-1022.html @@ -0,0 +1,127 @@ + IEEE VIS 2024 Content: Enhancing Arabic Poetic Structure Analysis through Visualization

Enhancing Arabic Poetic Structure Analysis through Visualization

Abdelmalek Berkani - University of Neuchâtel, Neuchâtel, Switzerland

Adrian Holzer - University of Neuchâtel, Neuchâtel, Switzerland

Room: To Be Announced

Abstract

This study explores the potential of visual representation in understanding the structural elements of Arabic poetry, a subject of significant educational and research interest. Our objective is to make Arabic poetic works more accessible to readers of both Arabic and non-Arabic linguistic backgrounds by employing visualization, exploration, and analytical techniques. We transformed poetry texts into syllables, identified their metrical structures, segmented verses into patterns, and then converted these patterns into visual representations. Following this, we computed and visualized the dissimilarities between these images, and overlaid their differences. Our findings suggest that the positional patterns across a poem play a pivotal role in effective poetry clustering, as demonstrated by our newly computed metrics. The results of our clustering experiments showed a marked improvement over previous attempts, thereby providing new insights into the composition and structure of Arabic poetry. This study underscored the value of visual representation in enhancing our understanding of Arabic poetry.
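
The pattern-to-image-to-dissimilarity pipeline can be sketched compactly; the encoding below (one row per verse, long syllables as 1) illustrates the idea rather than reproducing the paper's exact scheme:

    import numpy as np

    def pattern_image(verses, long_="-"):
        # Rasterize per-verse metrical patterns (strings over short/long
        # syllable marks, e.g. "v--v--") into a binary matrix so whole
        # poems can be compared as images.
        width = max(len(v) for v in verses)
        img = np.zeros((len(verses), width), dtype=np.uint8)
        for i, v in enumerate(verses):
            for j, c in enumerate(v):
                img[i, j] = 1 if c == long_ else 0
        return img

    def dissimilarity(img_a, img_b):
        # Hamming-style distance on the overlapping region of two images.
        h = min(img_a.shape[0], img_b.shape[0])
        w = min(img_a.shape[1], img_b.shape[1])
        return float(np.mean(img_a[:h, :w] != img_b[:h, :w]))

    poem1 = pattern_image(["v--v--", "v--v--"])
    poem2 = pattern_image(["vv-vv-", "v--v--"])
    print(dissimilarity(poem1, poem2))   # 0.167: positional patterns differ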

\ No newline at end of file diff --git a/program/paper_w-storygenai-5237.html b/program/paper_w-storygenai-5237.html new file mode 100644 index 000000000..77ca4aeb5 --- /dev/null +++ b/program/paper_w-storygenai-5237.html @@ -0,0 +1,127 @@ + IEEE VIS 2024 Content: The Data-Wink Ratio: Emoji Encoder for Generating Semantically-Resonant Unit Charts

The Data-Wink Ratio: Emoji Encoder for Generating Semantically-Resonant Unit Charts

Matthew Brehmer - University of Waterloo, Waterloo, Canada. Tableau Research, Seattle, United States

Vidya Setlur - Tableau Research, Palo Alto, United States

Zoe Zoe - McGraw Hill, Seattle, United States. Tableau Software, Seattle, United States

Michael Correll - Northeastern University, Portland, United States

Room: To Be Announced

Abstract

Communicating data insights in an accessible and engaging manner to a broader audience remains a significant challenge. To address this problem, we introduce the Emoji Encoder, a tool that generates a set of emoji recommendations for the field and category names appearing in a tabular dataset. The selected set of emoji encodings can be used to generate configurable unit charts that combine plain text and emojis as word-scale graphics. These charts can serve to contrast values across multiple quantitative fields for each row in the data or to communicate trends over time. Any resulting chart is simply a block of text characters, meaning that it can be directly copied into a text message or posted on a communication platform such as Slack or Teams. This work represents a step toward our larger goal of developing novel, fun, and succinct data storytelling experiences that engage those who do not identify as data analysts. Emoji-based unit charts can offer contextual cues related to the data at the center of a conversation on platforms where emoji-rich communication is typical.
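
Because the output is plain text, the chart generator itself can be tiny; a sketch, with hand-picked emoji standing in for the recommendations the Emoji Encoder produces:

    def emoji_unit_chart(rows, encodings, per_symbol=10):
        # Render a unit chart as plain text: one emoji per `per_symbol` units,
        # so the chart can be pasted straight into Slack or Teams.
        lines = []
        for label, values in rows:
            cells = "".join(encodings[f] * round(v / per_symbol)
                            for f, v in values.items())
            lines.append(f"{label:10} {cells}")
        return "\n".join(lines)

    chart = emoji_unit_chart(
        rows=[("Orchard A", {"apples": 40, "pears": 20}),
              ("Orchard B", {"apples": 10, "pears": 50})],
        encodings={"apples": "🍎", "pears": "🍐"})
    print(chart)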

\ No newline at end of file diff --git a/program/paper_w-storygenai-6168.html b/program/paper_w-storygenai-6168.html new file mode 100644 index 000000000..feb438c86 --- /dev/null +++ b/program/paper_w-storygenai-6168.html @@ -0,0 +1,127 @@ + IEEE VIS 2024 Content: Constraint representation towards precise data-driven storytelling

Constraint representation towards precise data-driven storytelling

Yu-Zhe Shi - The Hong Kong University of Science and Technology, Hong Kong, China

Haotian Li - The Hong Kong University of Science and Technology, Hong Kong, China

Lecheng Ruan - Peking University, Beijing, China

Huamin Qu - The Hong Kong University of Science and Technology, Hong Kong, China

Room: To Be Announced

Abstract

Data-driven storytelling serves as a crucial bridge for communicating ideas in a persuasive way. However, the manual creation of data stories is a multifaceted, labor-intensive, and case-specific effort, limiting their broader application. As a result, automating the creation of data stories has emerged as a significant research thrust. Despite advances in Artificial Intelligence, the systematic generation of data stories remains challenging due to their hybrid nature: they must frame a perspective based on a seed idea in a top-down manner, similar to traditional storytelling, while coherently grounding insights of given evidence in a bottom-up fashion, akin to data analysis. These dual requirements necessitate precise constraints on the permissible space of a data story. In this viewpoint, we propose integrating constraints into the data story generation process. Defined upon the hierarchies of interpretation and articulation, constraints shape both narrations and illustrations to align with seed ideas and contextualized evidence. We identify the taxonomy and required functionalities of these constraints. Although constraints can be heterogeneous and latent, we explore the potential to represent them in a computation-friendly fashion via Domain-Specific Languages. We believe that leveraging constraints will balance the artistic and engineering aspects of data story generation.

\ No newline at end of file diff --git a/program/paper_w-storygenai-7043.html b/program/paper_w-storygenai-7043.html new file mode 100644 index 000000000..4f8bd73fe --- /dev/null +++ b/program/paper_w-storygenai-7043.html @@ -0,0 +1,127 @@ + IEEE VIS 2024 Content: From Data to Story: Towards Automatic Animated Data Video Creation with LLM-based Multi-Agent Systems

From Data to Story: Towards Automatic Animated Data Video Creation with LLM-based Multi-Agent Systems

Leixian Shen - The Hong Kong University of Science and Technology, Hong Kong, China

Haotian Li - The Hong Kong University of Science and Technology, Hong Kong, China

Yun Wang - Microsoft, Beijing, China

Huamin Qu - The Hong Kong University of Science and Technology, Hong Kong, China

Room: To Be Announced

Abstract

Creating data stories from raw data is challenging due to humans’ limited attention spans and the need for specialized skills. Recent advancements in large language models (LLMs) offer great opportunities to develop systems with autonomous agents to streamline the data storytelling workflow. Though multi-agent systems have benefits such as fully realizing LLM potentials with decomposed tasks for individual agents, designing such systems also faces challenges in task decomposition, performance optimization for sub-tasks, and workflow design. To better understand these issues, we develop Data Director, an LLM-based multi-agent system designed to automate the creation of animated data videos, a representative genre of data stories. Data Director interprets raw data, breaks down tasks, designs agent roles to make informed decisions automatically, and seamlessly integrates diverse components of data videos. A case study demonstrates Data Director’s effectiveness in generating data videos. Throughout development, we have derived lessons learned from addressing challenges, guiding further advancements in autonomous agents for data storytelling. We also shed light on future directions for global optimization, human-in-the-loop design, and the application of advanced multi-modal LLMs.

\ No newline at end of file diff --git a/program/paper_w-storygenai-7072.html b/program/paper_w-storygenai-7072.html new file mode 100644 index 000000000..b7e9ce114 --- /dev/null +++ b/program/paper_w-storygenai-7072.html @@ -0,0 +1,127 @@ + IEEE VIS 2024 Content: Show and Tell: Exploring Large Language Model’s Potential in Formative Educational Assessment of Data Stories

Show and Tell: Exploring Large Language Model’s Potential in Formative Educational Assessment of Data Stories

Naren Sivakumar - University of Maryland Baltimore County, Baltimore, United States

Lujie Karen Chen - University of Maryland, Baltimore County, Baltimore, United States

Pravalika Papasani - University of Maryland, Baltimore County, Baltimore, United States

Vigna Majmundar - University of Maryland, Baltimore County, Hanover, United States

Jinjuan Heidi Feng - Towson University, Towson, United States

Louise Yarnall - SRI International, Menlo Park, United States

Jiaqi Gong - University of Alabama, Tuscaloosa, United States

Room: To Be Announced

Abstract

Crafting accurate and insightful narratives from data visualizations is essential in data storytelling. Like creative writing, where one reads to write a story, data professionals must effectively ``read'' visualizations to create compelling data stories. In education, helping students develop these skills can be achieved through exercises that ask them to create narratives from data plots, demonstrating both ``show'' (describing the plot) and ``tell'' (interpreting the plot). Providing formative feedback on these exercises is crucial but challenging in large-scale educational settings with limited resources. This study explores using GPT-4o, a multimodal LLM, to generate and evaluate narratives from data plots. The LLM was tested in zero-shot, one-shot, and two-shot scenarios, generating narratives and self-evaluating their depth. Human experts also assessed the LLM's outputs. Additionally, the study developed machine learning and LLM-based models to assess student-generated narratives using LLM-generated data. Human experts validated a subset of these machine assessments. The findings highlight the potential of LLMs to support scalable formative assessment in teaching data storytelling skills, which has important implications for AI-supported educational interventions.

\ No newline at end of file diff --git a/program/paper_w-topoinvis-1027.html b/program/paper_w-topoinvis-1027.html new file mode 100644 index 000000000..766b873b7 --- /dev/null +++ b/program/paper_w-topoinvis-1027.html @@ -0,0 +1,127 @@ + IEEE VIS 2024 Content: Critical Point Extraction from Multivariate Functional Approximation

Critical Point Extraction from Multivariate Functional Approximation

Guanqun Ma - University of Utah, Salt Lake City, United States

David Lenz - Argonne National Laboratory, Lemont, United States

Tom Peterka - Argonne National Laboratory, Lemont, United States

Hanqi Guo - The Ohio State University, Columbus, United States

Bei Wang - University of Utah, Salt Lake City, United States

Room: To Be Announced

Abstract

Advances in high-performance computing require new ways to represent large-scale scientific data to support data storage, data transfers, and data analysis within scientific workflows. Multivariate functional approximation (MFA) has recently emerged as a new continuous meshless representation that approximates raw discrete data with a set of piecewise smooth functions. An MFA model of data thus offers a compact representation and supports high-order evaluation of values and derivatives anywhere in the domain. In this paper, we present CPE-MFA, the first critical point extraction framework designed for MFA models of large-scale, high-dimensional data. CPE-MFA extracts critical points directly from an MFA model without the need for discretization or resampling. This is the first step toward enabling continuous implicit models such as MFA to support topological data analysis at scale.
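
For readers outside topological data analysis, the objects being extracted have a compact statement (standard definitions, not notation from the paper). A point x* is critical when the gradient of the scalar field vanishes there, and because an MFA model is piecewise smooth with evaluable derivatives, such points can be located by root-finding on the model itself, for instance via Newton iteration:

    \nabla f(x^\ast) = 0, \qquad f \colon \mathbb{R}^d \to \mathbb{R}

    x_{k+1} = x_k - \bigl(\nabla^2 f(x_k)\bigr)^{-1} \nabla f(x_k)

The eigenvalue signs of the Hessian \nabla^2 f(x^\ast) then classify each critical point as a minimum, maximum, or saddle. Whether CPE-MFA uses Newton iteration specifically is not stated in the abstract; the formulas only fix the definitions.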

\ No newline at end of file diff --git a/program/paper_w-topoinvis-1031.html b/program/paper_w-topoinvis-1031.html new file mode 100644 index 000000000..d40550f8b --- /dev/null +++ b/program/paper_w-topoinvis-1031.html @@ -0,0 +1,127 @@ + IEEE VIS 2024 Content: Asymptotic Topology of 3D Linear Symmetric Tensor Fields

Asymptotic Topology of 3D Linear Symmetric Tensor Fields

Xinwei Lin - Oregon State University, Corvallis, United States

Yue Zhang - Oregon State University, Corvallis, United States

Eugene Zhang - Oregon State University, Corvallis, United States

Room: To Be Announced

Abstract

3D symmetric tensor fields have a wide range of applications in science and engineering. The topology of such fields can provide critical insight into not only the structures in tensor fields but also their respective applications. Existing research focuses on the extraction of topological features such as degenerate curves and neutral surfaces. In this paper, we investigate the asymptotic behaviors of these topological features in the sphere of infinity. Our research leads to both theoretical analysis and observations that can aid further classifications of tensor field topology.
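
For context, the degenerate features mentioned here have a standard algebraic characterization (textbook material, not the paper's notation): a symmetric tensor T is degenerate when at least two of its eigenvalues coincide, equivalently when the discriminant of its characteristic polynomial vanishes,

    \Delta(T) = (\lambda_1 - \lambda_2)^2 (\lambda_2 - \lambda_3)^2 (\lambda_3 - \lambda_1)^2 = 0 .

In a generic 3D tensor field, the solution set of \Delta(T(x)) = 0 forms the degenerate curves whose behavior toward the sphere of infinity the paper analyzes.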

\ No newline at end of file diff --git a/program/paper_w-topoinvis-1033.html b/program/paper_w-topoinvis-1033.html new file mode 100644 index 000000000..52db4ea36 --- /dev/null +++ b/program/paper_w-topoinvis-1033.html @@ -0,0 +1,127 @@ + IEEE VIS 2024 Content: Topological Simplifcation of Jacobi Sets for Piecewise-Linear Bivariate 2D Scalar Fields

Topological Simplification of Jacobi Sets for Piecewise-Linear Bivariate 2D Scalar Fields

Felix Raith - Leipzig University, Leipzig, Germany

Gerik Scheuermann - Leipzig University, Leipzig, Germany

Christian Heine - Leipzig University, Leipzig, Germany

Room: To Be Announced

Abstract

Jacobi sets are an important method to investigate the relationship between Morse functions. The Jacobi set of two Morse functions is the set of all points where the functions' gradients are linearly dependent. Both the segmentation of the domain by Jacobi sets and the Jacobi sets themselves have proven to be useful tools in multi-field visualization, in data analysis across various applications, and for accelerating extraction algorithms. On a triangulated grid, they can be calculated by piecewise linear interpolation. In practice, Jacobi sets can become very complex and large due to noise and numerical errors. Some techniques for simplifying Jacobi sets exist, but they only reduce individual elements such as noise, or are purely theoretical. These techniques often change only the visual representation of the Jacobi sets, not the underlying data. In this paper, we present an algorithm that simplifies the Jacobi sets of 2D bivariate scalar fields and at the same time modifies the underlying bivariate scalar fields, while preserving the essential structures of the fields. We use a neighborhood graph to select the areas to be reduced and collapse these cells individually. We investigate the influence of different neighborhood graphs and present an adaptation for the visualization of Jacobi sets that takes the collapsed cells into account. We apply our algorithm to a range of analytical and real-world data sets and compare it with established methods that also simplify the underlying bivariate scalar fields.
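
Concretely, for Morse functions f and g on a common 2D domain M, the definition quoted above reads (standard formulation):

    \mathcal{J}(f,g) = \{\, x \in M : \nabla f(x) \text{ and } \nabla g(x) \text{ are linearly dependent} \,\}
                     = \{\, x \in M : \det\left[\, \nabla f(x) \;\; \nabla g(x) \,\right] = 0 \,\}

With piecewise linear f and g on a triangulation, the gradients are constant per triangle, so this determinant condition is evaluated combinatorially on the mesh; that discrete structure is what the simplification algorithm collapses cell by cell.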

\ No newline at end of file diff --git a/program/paper_w-topoinvis-1034.html b/program/paper_w-topoinvis-1034.html new file mode 100644 index 000000000..cdb2eae83 --- /dev/null +++ b/program/paper_w-topoinvis-1034.html @@ -0,0 +1,127 @@ + IEEE VIS 2024 Content: Revisiting Accurate Geometry for the Morse-Smale Complexes

Revisiting Accurate Geometry for the Morse-Smale Complexes

Son Le Thanh - KTH Royal Institute of Technology, Stockholm, Sweden

Michael Ankele - KTH Royal Institute of Technology, Stockholm, Sweden

Tino Weinkauf - KTH Royal Institute of Technology, Stockholm, Sweden

Room: To Be Announced

Abstract

The Morse-Smale complex is a standard tool in visual data analysis. The classic definition is based on a continuous view of the gradient of a scalar function where its zeros are the critical points. These points are connected via gradient curves and surfaces emanating from saddle points, known as separatrices. In a discrete setting, the Morse-Smale complex is commonly extracted by constructing a combinatorial gradient assuming the steepest descent direction. Previous works have shown that this method results in a geometric embedding of the separatrices that can be fundamentally different from those in the continuous case. To achieve a similar embedding, different approaches for constructing a combinatorial gradient were proposed. In this paper, we show that these approaches generate a different topology, i.e., the connectivity between critical points changes. Additionally, we demonstrate that the steepest descent method can compute topologically and geometrically accurate Morse-Smale complexes when applied to certain types of grids. Based on these observations, we suggest a method to attain both geometric and topological accuracy for the Morse-Smale complex of data sampled on a uniform grid.
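
For context, the steepest-descent construction the abstract refers to can be sketched in a few lines; the toy path tracer below follows the standard lowest-neighbor rule on a uniform grid and is an assumption of ours, not this paper's method.

```python
# Toy sketch (our assumption of the standard discrete rule, not the paper's
# method): from a grid vertex, repeatedly step to the lowest 8-neighbor
# until a local minimum is reached.
import numpy as np

def steepest_descent_path(field, start):
    """Follow the lowest neighbor from `start`; returns the visited path."""
    path = [start]
    i, j = start
    while True:
        nbrs = [(i + di, j + dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)
                if (di, dj) != (0, 0)
                and 0 <= i + di < field.shape[0] and 0 <= j + dj < field.shape[1]]
        lowest = min(nbrs, key=lambda n: field[n])
        if field[lowest] >= field[i, j]:   # no lower neighbor: local minimum
            return path
        i, j = lowest
        path.append(lowest)

rng = np.random.default_rng(0)
f = rng.random((16, 16))
print(steepest_descent_path(f, (8, 8)))
```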

\ No newline at end of file
diff --git a/program/paper_w-topoinvis-1038.html b/program/paper_w-topoinvis-1038.html
new file mode 100644
index 000000000..77bcf1ad6
--- /dev/null
+++ b/program/paper_w-topoinvis-1038.html
@@ -0,0 +1,127 @@
+ IEEE VIS 2024 Content: Multi-scale Cycle Tracking in Dynamic Planar Graphs

Multi-scale Cycle Tracking in Dynamic Planar Graphs

Farhan Rasheed - Linköping University, Linköping, Sweden

Abrar Naseer - Indian Institute of Science, Bangalore, India

Emma Nilsson - Linköping University, Norrköping, Sweden

Talha Bin Masood - Linköping University, Norrköping, Sweden

Ingrid Hotz - Linköping University, Norrköping, Sweden

Room: To Be Announced

Abstract

This paper presents a nested tracking framework for analyzing cycles in 2D force networks within granular materials. These materials are composed of interacting particles, whose interactions are described by a force network. Understanding the cycles within these networks at various scales and their evolution under external loads is crucial, as they significantly contribute to the mechanical and kinematic properties of the system. Our approach involves computing a cycle hierarchy by partitioning the 2D domain into regions bounded by cycles in the force network. We can adapt concepts from nested tracking graphs originally developed for merge trees by leveraging the duality between this partitioning and the cycles. We demonstrate the effectiveness of our method on two force networks derived from experiments with photo-elastic disks.
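
As a toy illustration of the starting point, one can treat the force network as a planar graph whose bounded regions are delimited by cycles; the use of networkx's minimum cycle basis below is our own choice for illustration, not the authors' pipeline.

```python
# Toy sketch (ours, not the paper's pipeline): extract the cycles that
# bound regions of a small 2D force network modeled as a planar graph.
import networkx as nx

# Particles are nodes, contact forces are edges.
G = nx.Graph([(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 2)])

# Each basis cycle corresponds to a bounded region of the planar network;
# nesting such regions across scales yields a cycle hierarchy.
print(nx.minimum_cycle_basis(G))   # e.g. [[0, 1, 2], [2, 3, 4]]
```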

\ No newline at end of file
diff --git a/program/paper_w-topoinvis-1041.html b/program/paper_w-topoinvis-1041.html
new file mode 100644
index 000000000..155ee692e
--- /dev/null
+++ b/program/paper_w-topoinvis-1041.html
@@ -0,0 +1,127 @@
+ IEEE VIS 2024 Content: Efficient representation and analysis for a large tetrahedral mesh using Apache Spark

Efficient representation and analysis for a large tetrahedral mesh using Apache Spark

Yuehui Qian - University of Maryland, College Park, College Park, United States

Guoxi Liu - Clemson University, Clemson, United States

Federico Iuricich - Clemson University, Clemson, United States

Leila De Floriani - University of Maryland, College Park, United States

Room: To Be Announced

Abstract

Tetrahedral meshes are widely used due to their flexibility and adaptability in representing changes of complex geometries and topology. However, most existing data structures struggle to efficiently encode the irregular connectivity of tetrahedral meshes with billions of vertices. We address this problem by proposing a novel framework for efficient and scalable analysis of large tetrahedral meshes using Apache Spark. The proposed framework, called Tetra-Spark, features optimized approaches to locally compute many connectivity relations by first retrieving the Vertex-Tetrahedron (VT) relation. This strategy significantly improves Tetra-Spark's efficiency in performing morphology computations on large tetrahedral meshes. To prove the effectiveness and scalability of such a framework, we conduct a comprehensive comparison against a vanilla Spark implementation for the analysis of tetrahedral meshes. Our experimental evaluation shows that Tetra-Spark achieves up to a 78x speedup and reduces memory usage by up to 80% when retrieving connectivity relations with the VT relation available. This optimized design further accelerates subsequent morphology computations, resulting in up to a 47.7x speedup.
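
The Vertex-Tetrahedron (VT) relation at the core of this design can be sketched in a few lines of plain PySpark. The names and data layout below are our assumptions for illustration and do not reflect the Tetra-Spark API.

```python
# Minimal sketch of a VT relation in plain PySpark (layout and names are
# our assumptions, not the Tetra-Spark API): map every tetrahedron to
# (vertex, tet_id) pairs, then group by vertex.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("vt-relation-sketch").getOrCreate()

# Each tetrahedron: (tet_id, (v0, v1, v2, v3))
tets = spark.sparkContext.parallelize([
    (0, (0, 1, 2, 3)),
    (1, (1, 2, 3, 4)),
])

# VT relation: for every vertex, the ids of all tetrahedra containing it.
vt = (tets.flatMap(lambda t: [(v, t[0]) for v in t[1]])
          .groupByKey()
          .mapValues(list))

print(sorted(vt.collect()))   # e.g. [(0, [0]), (1, [0, 1]), ...]
spark.stop()
```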

\ No newline at end of file
diff --git a/program/paper_w-uncertainty-1007.html b/program/paper_w-uncertainty-1007.html
new file mode 100644
index 000000000..b4c91bb11
--- /dev/null
+++ b/program/paper_w-uncertainty-1007.html
@@ -0,0 +1,127 @@
+ IEEE VIS 2024 Content: Exploring Uncertainty Visualization for Degenerate Tensors in 3D Symmetric Second-Order Tensor Field Ensembles

Exploring Uncertainty Visualization for Degenerate Tensors in 3D Symmetric Second-Order Tensor Field Ensembles

Tadea Schmitz - University of Cologne, Cologne, Germany

Tim Gerrits - RWTH Aachen University, Aachen, Germany

Room: To Be Announced

Abstract

Symmetric second-order tensors are fundamental in various scientific and engineering domains, as they can represent properties such as material stresses or diffusion processes in brain tissue. In recent years, several approaches have been introduced and improved to analyze these fields using topological features, such as degenerate tensor locations, i.e., where the tensor has repeated eigenvalues, or normal surfaces. Traditionally, the identification of such features has been limited to single tensor fields. However, it has become common to create ensembles to account for uncertainties and variability in simulations and measurements. In this work, we explore novel methods for describing and visualizing degenerate tensor locations in 3D symmetric second-order tensor field ensembles. We base our considerations on the tensor mode and analyze its practicality in characterizing the uncertainty of degenerate tensor locations before proposing a variety of visualization strategies to effectively communicate degenerate tensor information. We demonstrate our techniques for synthetic and simulation data sets. The results indicate that the interplay of different descriptions for uncertainty can effectively convey information on degenerate tensor locations.
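
For background, the tensor mode mentioned above is commonly defined from the deviatoric part of the tensor, with degenerate tensors sitting exactly at mode ±1. The numpy sketch below uses that standard definition as our assumption; consult the paper for its exact formulation.

```python
# Sketch of the standard tensor mode (our assumption, not the authors'
# code): mode(T) = 3*sqrt(6)*det(dev(T)/||dev(T)||_F). Degenerate tensors
# (repeated eigenvalues) have mode exactly +1 or -1.
import numpy as np

def tensor_mode(T):
    dev = T - np.trace(T) / 3.0 * np.eye(3)   # deviatoric part
    n = np.linalg.norm(dev)                   # Frobenius norm
    if n == 0.0:                              # isotropic tensor: undefined
        return np.nan
    return 3.0 * np.sqrt(6.0) * np.linalg.det(dev / n)

T = np.diag([2.0, -1.0, -1.0])                # repeated eigenvalue -1
print(tensor_mode(T))                         # 1.0: degenerate
```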

\ No newline at end of file
diff --git a/program/paper_w-uncertainty-1009.html b/program/paper_w-uncertainty-1009.html
new file mode 100644
index 000000000..224d87003
--- /dev/null
+++ b/program/paper_w-uncertainty-1009.html
@@ -0,0 +1,127 @@
+ IEEE VIS 2024 Content: Voicing Uncertainty: How Speech, Text, and Visualizations Influence Decisions with Data Uncertainty

Voicing Uncertainty: How Speech, Text, and Visualizations Influence Decisions with Data Uncertainty

Chase Stokes - University of California Berkeley, Berkeley, United States

Chelsea Sanker - Stanford University, Stanford, United States

Bridget Cogley - Versalytix, Columbus, United States

Vidya Setlur - Tableau Research, Palo Alto, United States

Room: To Be Announced

Abstract

Understanding and communicating data uncertainty is crucial for informed decision-making across various domains, including finance, healthcare, and public policy. This study investigates the impact of gender and acoustic variables on decision-making, confidence, and trust through a crowdsourced experiment. We compared visualization-only representations of uncertainty to text-forward and speech-forward bimodal representations, including multiple synthetic voices across gender. Speech-forward representations led to an increase in risky decisions, and text-forward representations led to lower confidence. Contrary to prior work, speech-forward forecasts did not receive higher ratings of trust. Higher normalized pitch led to a slight increase in decision confidence, but other voice characteristics had minimal impact on decisions and trust. An exploratory analysis of accented speech showed consistent results with the main experiment and additionally indicated lower trust ratings for information presented in Indian and Kenyan accents. The results underscore the importance of considering acoustic and contextual factors in presentation of data uncertainty.

\ No newline at end of file
diff --git a/program/paper_w-uncertainty-1010.html b/program/paper_w-uncertainty-1010.html
new file mode 100644
index 000000000..b1ecb0e43
--- /dev/null
+++ b/program/paper_w-uncertainty-1010.html
@@ -0,0 +1,127 @@
+ IEEE VIS 2024 Content: Uncertainty-Informed Volume Visualization using Implicit Neural Representation

Uncertainty-Informed Volume Visualization using Implicit Neural Representation

Shanu Saklani - IIT Kanpur, Kanpur, India

Chitwan Goel - Indian Institute of Technology Kanpur, Kanpur, India

Shrey Bansal - Indian Institute of Technology Kanpur, Kanpur, India

Zhe Wang - Oak Ridge National Laboratory, Oak Ridge, United States

Soumya Dutta - Indian Institute of Technology Kanpur (IIT Kanpur), Kanpur, India

Tushar M. Athawale - Oak Ridge National Laboratory, Oak Ridge, United States

David Pugmire - Oak Ridge National Laboratory, Oak Ridge, United States

Chris R. Johnson - University of Utah, Salt Lake City, United States

Room: To Be Announced

Abstract

The increasing adoption of Deep Neural Networks (DNNs) has led to their application in many challenging scientific visualization tasks. While advanced DNNs offer impressive generalization capabilities, understanding factors such as model prediction quality, robustness, and uncertainty is crucial. These insights can enable domain scientists to make informed decisions about their data. However, DNNs inherently lack the ability to estimate prediction uncertainty, necessitating new research to construct robust uncertainty-aware visualization techniques tailored for various visualization tasks. In this work, we propose uncertainty-aware implicit neural representations to model scalar field data sets effectively and comprehensively study the efficacy and benefits of estimated uncertainty information for volume visualization tasks. We evaluate the effectiveness of two principled deep uncertainty estimation techniques: (1) Deep Ensemble and (2) Monte Carlo Dropout (MCDropout). These techniques enable uncertainty-informed volume visualization in scalar field data sets. Our extensive exploration across multiple data sets demonstrates that uncertainty-aware models produce informative volume visualization results. Moreover, integrating prediction uncertainty enhances the trustworthiness of our DNN model, making it suitable for robustly analyzing and visualizing real-world scientific volumetric data sets.
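
Of the two techniques named, Monte Carlo Dropout is straightforward to sketch generically: dropout stays active at inference, and the spread over repeated stochastic forward passes serves as the uncertainty estimate. The placeholder coordinate network below is our own choice, not the paper's model.

```python
# Generic Monte Carlo Dropout sketch (the standard technique named in the
# abstract; the network is a placeholder of ours, not the paper's model)
# for an implicit neural representation f(x, y, z) -> scalar.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Linear(3, 128), nn.ReLU(), nn.Dropout(p=0.1),
    nn.Linear(128, 128), nn.ReLU(), nn.Dropout(p=0.1),
    nn.Linear(128, 1),
)

def mc_dropout_predict(model, coords, n_samples=32):
    """Mean and std of the prediction over stochastic forward passes."""
    model.train()                      # keep dropout active at inference
    with torch.no_grad():
        samples = torch.stack([model(coords) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)

coords = torch.rand(1024, 3)           # query positions in the volume
mean, std = mc_dropout_predict(net, coords)
print(mean.shape, std.mean().item())   # std maps to prediction uncertainty
```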

\ No newline at end of file
diff --git a/program/paper_w-uncertainty-1011.html b/program/paper_w-uncertainty-1011.html
new file mode 100644
index 000000000..05d66a0bc
--- /dev/null
+++ b/program/paper_w-uncertainty-1011.html
@@ -0,0 +1,127 @@
+ IEEE VIS 2024 Content: UADAPy: An Uncertainty-Aware Visualization and Analysis Toolbox

UADAPy: An Uncertainty-Aware Visualization and Analysis Toolbox

Patrick Paetzold - University of Konstanz, Konstanz, Germany

David Hägele - University of Stuttgart, Stuttgart, Germany

Marina Evers - University of Stuttgart, Stuttgart, Germany

Daniel Weiskopf - University of Stuttgart, Stuttgart, Germany

Oliver Deussen - University of Konstanz, Konstanz, Germany

Room: To Be Announced

Abstract

Current research provides methods to communicate uncertainty and adapts classical algorithms of the visualization pipeline to take the uncertainty into account. Various existing visualization frameworks include methods to present uncertain data but do not offer transformation techniques tailored to uncertain data. Therefore, we propose a software package for uncertainty-aware data analysis in Python (UADAPy) offering methods for uncertain data along the visualization pipeline. We aim to provide a platform that is the foundation for further integration of uncertainty algorithms and visualizations. It provides common utility functionality to support research in uncertainty-aware visualization algorithms and makes state-of-the-art research results accessible to the end user. The project is available at https://github.com/UniStuttgart-VISUS/uadapy.

\ No newline at end of file
diff --git a/program/paper_w-uncertainty-1012.html b/program/paper_w-uncertainty-1012.html
new file mode 100644
index 000000000..fa32f065e
--- /dev/null
+++ b/program/paper_w-uncertainty-1012.html
@@ -0,0 +1,127 @@
+ IEEE VIS 2024 Content: FunM^2C: A Filter for Uncertainty Visualization of Multivariate Data on Multi-Core Devices

FunM^2C: A Filter for Uncertainty Visualization of Multivariate Data on Multi-Core Devices

Gautam Hari - Indiana University Bloomington, Bloomington, United States

Nrushad A Joshi - Indiana University Bloomington, Bloomington, United States

Zhe Wang - Oak Ridge National Laboratory, Oak Ridge, United States

Qian Gong - Oak Ridge National Laboratory, Oak Ridge, United States

David Pugmire - Oak Ridge National Laboratory, Oak Ridge, United States

Kenneth Moreland - Oak Ridge National Laboratory, Oak Ridge, United States

Chris R. Johnson - University of Utah, Salt Lake City, United States

Scott Klasky - Oak Ridge National Laboratory, Oak Ridge, United States

Norbert Podhorszki - Oak Ridge National Laboratory, Oak Ridge, United States

Tushar M. Athawale - Oak Ridge National Laboratory, Oak Ridge, United States

Room: To Be Announced

Abstract

Uncertainty visualization is an emerging research topic in data visualization because neglecting uncertainty in visualization can lead to inaccurate assessments. In this short paper, we study the propagation of multivariate data uncertainty in visualization. Although there have been a few advancements in probabilistic uncertainty visualization of multivariate data, three critical challenges remain to be addressed. First, the state-of-the-art probabilistic uncertainty visualization framework is limited to bivariate data (two variables). Second, the existing uncertainty visualization algorithms use computationally intensive techniques and lack support for cross-platform portability. Third, as a consequence of the computational expense, integration into interactive production visualization tools is impractical. In this work, we address all three issues and make a threefold contribution. First, we generalize the state-of-the-art probabilistic framework for bivariate data to multivariate data with an arbitrary number of variables. Second, by utilizing VTK-m's shared-memory parallelism and cross-platform compatibility features, we demonstrate acceleration of multivariate uncertainty visualization on different many-core architectures, including OpenMP and AMD GPU backends. Third, we demonstrate the integration of our algorithms with the ParaView software. We demonstrate the utility of our algorithms through experiments on multivariate simulation data.
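
The multivariate probabilistic question underlying such a filter can be illustrated with a tiny Monte Carlo estimate: given an uncertainty model of the values at a grid point, how likely is it that all variables fall inside a selected range? Everything below (the Gaussian model, the diagonal covariance, the numbers) is an assumption for illustration, not the FunM^2C implementation.

```python
# Hedged numpy illustration (not the FunM^2C implementation): estimate the
# probability that all variables of an uncertain multivariate value fall
# inside a user-selected trait range, under an assumed Gaussian model.
import numpy as np

rng = np.random.default_rng(7)
mean = np.array([0.4, 1.2, -0.3])          # per-variable means (assumed)
cov = np.diag([0.05, 0.20, 0.10])          # assumed diagonal covariance
lo = np.array([0.3, 1.0, -0.5])            # trait range, lower bounds
hi = np.array([0.6, 1.5,  0.0])            # trait range, upper bounds

samples = rng.multivariate_normal(mean, cov, size=100_000)
inside = np.all((samples >= lo) & (samples <= hi), axis=1)
print(inside.mean())                        # Monte Carlo probability estimate
```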

\ No newline at end of file
diff --git a/program/paper_w-uncertainty-1013.html b/program/paper_w-uncertainty-1013.html
new file mode 100644
index 000000000..d61213cf1
--- /dev/null
+++ b/program/paper_w-uncertainty-1013.html
@@ -0,0 +1,127 @@
+ IEEE VIS 2024 Content: Glyph-Based Uncertainty Visualization and Analysis of Time-Varying Vector Field

Glyph-Based Uncertainty Visualization and Analysis of Time-Varying Vector Field

Timbwaoga A. J. Ouermi - Scientific Computing and Imaging Institute, Salt Lake City, United States

Jixian Li - University of Utah, Salt Lake City, United States

Zachary Morrow - Sandia National Laboratories, Albuquerque, United States

Bart van Bloemen Waanders - Sandia National Laboratories, Albuquerque, United States

Chris R. Johnson - University of Utah, Salt Lake City, United States

Room: To Be Announced

Abstract

Uncertainty is inherent to most data, including vector field data, yet it is often omitted in visualizations and representations. Effective uncertainty visualization can enhance the understanding and interpretability of vector field data. For instance, in the context of severe weather events such as hurricanes and wildfires, effective uncertainty visualization can provide crucial insights about fire spread or hurricane behavior and aid in resource management and risk mitigation. Glyphs are commonly used for representing vector uncertainty but are often limited to 2D. In this work, we present a glyph-based technique for accurately representing 3D vector uncertainty and a comprehensive framework for visualization, exploration, and analysis using our new glyphs. We employ hurricane and wildfire examples to demonstrate the efficacy of our glyph design and visualization tool in conveying vector field uncertainty.

\ No newline at end of file
diff --git a/program/paper_w-uncertainty-1014.html b/program/paper_w-uncertainty-1014.html
new file mode 100644
index 000000000..bfa537ed9
--- /dev/null
+++ b/program/paper_w-uncertainty-1014.html
@@ -0,0 +1,127 @@
+ IEEE VIS 2024 Content: Estimation and Visualization of Isosurface Uncertainty from Linear and High-Order Interpolation Methods

Estimation and Visualization of Isosurface Uncertainty from Linear and High-Order Interpolation Methods

Timbwaoga A. J. Ouermi - Scientific Computing and Imaging Institute, Salt Lake City, United States

Jixian Li - University of Utah, Salt Lake City, United States

Tushar M. Athawale - Oak Ridge National Laboratory, Oak Ridge, United States

Chris R. Johnson - University of Utah, Salt Lake City, United States

Room: To Be Announced

Abstract

Isosurface visualization is fundamental for exploring and analyzing 3D volumetric data. Marching cubes (MC) algorithms with linear interpolation are commonly used for isosurface extraction and visualization. Although linear interpolation is easy to implement, it has limitations when the underlying data is complex and high-order, which is the case for most real-world data. Linear interpolation can output vertices at the wrong location. Its inability to deal with sharp features and features smaller than grid cells can create holes and broken pieces in the extracted isosurface. Despite these limitations, isosurface visualizations typically do not include insight into the spatial location and the magnitude of these errors. We utilize high-order interpolation methods with MC algorithms and interactive visualization to highlight these uncertainties. Our visualization tool helps identify the regions of high interpolation errors. It also allows users to query local areas for details and compare the differences between isosurfaces from different interpolation methods. In addition, we employ high-order methods to identify and reconstruct possible features that linear methods cannot detect. We showcase how our visualization tool helps explore and understand the extracted isosurface errors through synthetic and real-world data.
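
The vertex-placement error described above is easy to see in one dimension: marching cubes places the crossing by linear interpolation along a cell edge, which is exact only when the data is linear there. The quadratic signal below is our own toy example.

```python
# Toy sketch (our example): the marching cubes linear estimate of an
# isovalue crossing along one cell edge versus the true crossing of a
# quadratic signal, illustrating the vertex-placement error.
import numpy as np

f = lambda t: t * t          # "true" high-order signal along the edge
iso = 0.25                   # isovalue; the true crossing is at t = 0.5
f0, f1 = f(0.0), f(1.0)      # only the endpoint samples are known to MC

t_linear = (iso - f0) / (f1 - f0)   # linear interpolation estimate
t_true = np.sqrt(iso)               # crossing of the quadratic itself
print(t_linear, t_true, abs(t_linear - t_true))  # 0.25 vs 0.5: error 0.25
```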

\ No newline at end of file
diff --git a/program/paper_w-uncertainty-1015.html b/program/paper_w-uncertainty-1015.html
new file mode 100644
index 000000000..4f0b11092
--- /dev/null
+++ b/program/paper_w-uncertainty-1015.html
@@ -0,0 +1,127 @@
+ IEEE VIS 2024 Content: Accelerated Depth Computation for Surface Boxplots with Deep Learning

Accelerated Depth Computation for Surface Boxplots with Deep Learning

Mengjiao Han - University of Utah, Salt Lake City, United States

Tushar M. Athawale - Oak Ridge National Laboratory, Oak Ridge, United States

Jixian Li - University of Utah, Salt Lake City, United States

Chris R. Johnson - University of Utah, Salt Lake City, United States

Room: To Be Announced

Abstract

Functional depth is a well-known technique used to derive descriptive statistics (e.g., median, quartiles, and outliers) for 1D data. Surface boxplots extend this concept to ensembles of images, helping scientists and users identify representative and outlier images. However, the computational time for surface boxplots increases cubically with the number of ensemble members, making it impractical for integration into visualization tools. In this paper, we propose a deep-learning solution for efficient depth prediction and computation of surface boxplots for time-varying ensemble data. Our deep learning framework accurately predicts member depths in a surface boxplot, achieving average speedups of 6X on a CPU and 15X on a GPU for the 2D Red Sea dataset with 50 ensemble members compared to the traditional depth computation algorithm. Our approach achieves at least a 99% level of rank preservation, with order flipping occurring only at pairs with extremely similar depth values that pose no statistical differences. This local flipping does not significantly impact the overall depth order of the ensemble members.
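
For context, the (j = 2) functional band depth that surface boxplots rank members by can be written compactly; the sketch below is the textbook form whose cost grows cubically with the ensemble size, i.e., the computation the paper's network learns to shortcut. It is our own sketch, not the authors' implementation.

```python
# Textbook (j = 2) functional band depth sketch (our assumption, not the
# authors' code): for every pair of members, count which members lie
# entirely inside the band the pair spans. n^2 pairs, each checked against
# n members -> the cubic cost the abstract refers to.
import numpy as np
from itertools import combinations

def band_depth(ensemble):
    """ensemble: (n_members, n_points); returns one depth value per member."""
    n = len(ensemble)
    depth = np.zeros(n)
    for i, j in combinations(range(n), 2):
        lo = np.minimum(ensemble[i], ensemble[j])
        hi = np.maximum(ensemble[i], ensemble[j])
        inside = np.all((ensemble >= lo) & (ensemble <= hi), axis=1)
        depth += inside
    return depth / (n * (n - 1) / 2)

rng = np.random.default_rng(1)
members = rng.normal(size=(50, 100)).cumsum(axis=1)   # 50 random walks
print(band_depth(members).argmax())                   # most central member
```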

\ No newline at end of file
diff --git a/program/paper_w-uncertainty-1016.html b/program/paper_w-uncertainty-1016.html
new file mode 100644
index 000000000..37c8b3a64
--- /dev/null
+++ b/program/paper_w-uncertainty-1016.html
@@ -0,0 +1,127 @@
+ IEEE VIS 2024 Content: Visualizing Uncertainties in Ensemble Wildfire Forecast Simulations

Visualizing Uncertainties in Ensemble Wildfire Forecast Simulations

Jixian Li - University of Utah, Salt Lake City, United States

Timbwaoga A. J. Ouermi - Scientific Computing and Imaging Institute, Salt Lake City, United States

Chris R. Johnson - University of Utah, Salt Lake City, United States

Room: To Be Announced

Abstract

Wildfire poses substantial risks to our health, environment, and economy. Studying wildfire is challenging due to its complex interaction with atmospheric dynamics and the terrain. Researchers have employed ensemble simulations to study the relationship between variables and mitigate uncertainties in unpredictable initial conditions. However, many domain scientists are unaware of the advanced visualization tools available for conveying uncertainty. To bring some of these uncertainty visualization techniques to domain scientists, we build an interactive visualization system that utilizes a band-depth-based method providing a statistical summary and visualization for fire front contours from the ensemble. We augment the visualization system with capabilities to study wildfires as a dynamic system. In this paper, we demonstrate how our system can support domain scientists in studying fire spread patterns, identifying outlier simulations, and navigating to interesting instances based on a summary of events.

\ No newline at end of file
diff --git a/program/paper_w-uncertainty-1017.html b/program/paper_w-uncertainty-1017.html
new file mode 100644
index 000000000..296e1472b
--- /dev/null
+++ b/program/paper_w-uncertainty-1017.html
@@ -0,0 +1,127 @@
+ IEEE VIS 2024 Content: Uncertainty Visualization Challenges in Decision Systems with Ensemble Data & Surrogate Models

Uncertainty Visualization Challenges in Decision Systems with Ensemble Data & Surrogate Models

Sam Molnar - National Renewable Energy Lab, Golden, United States

J.D. Laurence-Chasen - National Renewable Energy Laboratory, Golden, United States

Yuhan Duan - The Ohio State University, Columbus, United States. National Renewable Energy Lab, Golden, United States

Julie Bessac - National Renewable Energy Laboratory, Golden, United States

Kristi Potter - National Renewable Energy Laboratory, Golden, United States

Room: To Be Announced

Abstract

Uncertainty visualization is a key component in translating important insights from ensemble data into actionable decision-making by visually conveying various aspects of uncertainty within a system. With the recent advent of fast surrogate models for computationally expensive simulations, users can interact with more aspects of data spaces than ever before. However, the integration of ensemble data with surrogate models in a decision-making tool brings up new challenges for uncertainty visualization, namely how to reconcile and communicate the new and different types of uncertainties brought in by surrogates and how to utilize these new data estimates in actionable ways. In this work, we examine these issues as they relate to high-dimensional data visualization, the integration of discrete datasets and the continuous representations of those datasets, and the unique difficulties associated with systems that allow users to iterate between input and output spaces. We assess the role of uncertainty visualization in facilitating intuitive and actionable interaction with ensemble data and surrogate models, and highlight key challenges in this new frontier of computational simulation.

\ No newline at end of file
diff --git a/program/paper_w-uncertainty-1018.html b/program/paper_w-uncertainty-1018.html
new file mode 100644
index 000000000..e057e716d
--- /dev/null
+++ b/program/paper_w-uncertainty-1018.html
@@ -0,0 +1,127 @@
+ IEEE VIS 2024 Content: Effects of Forecast Number, Order, and Cost in Multiple Forecast Visualizations

Effects of Forecast Number, Order, and Cost in Multiple Forecast Visualizations

Laura Matzen - Sandia National Laboratories, Albuquerque, United States

Mallory C Stites - Sandia National Laboratories, Albuquerque, United States

Kristin M Divis - Sandia National Laboratories, Albuquerque, United States

Alexander Bendeck - Georgia Institute of Technology, Atlanta, United States

John Stasko - Georgia Institute of Technology, Atlanta, United States

Lace M. Padilla - Northeastern University, Boston, United States

Room: To Be Announced

Abstract

Although people frequently make decisions based on uncertain forecasts about future events, there is little guidance about how best to represent the uncertainty in forecasts. One common approach is to use multiple forecast visualizations, in which multiple forecasts are plotted on the same graph. This provides an implicit representation of the uncertainty in the data, but it is not clear how many forecasts to show, or how viewers might be influenced by seeing the more extreme forecasts rather than those closer to the mean. In this study, we showed participants forecasts of wind speed data, and they made decisions based on their predictions about the future wind speed. We allowed participants to choose how many forecasts to view prior to making a decision, and we manipulated the ordering of the forecasts and the cost of each additional forecast. We found that participants viewed more forecasts when the outcome was more ambiguous. The order of the forecasts had little impact on their decisions when there was no cost for the additional information. However, when there was a cost for each forecast, participants were much more likely to make a guess based on only the first forecast shown. In this case, showing one of the extreme forecasts first led to less optimal decisions.

\ No newline at end of file
diff --git a/program/paper_w-uncertainty-1019.html b/program/paper_w-uncertainty-1019.html
new file mode 100644
index 000000000..a60950b2b
--- /dev/null
+++ b/program/paper_w-uncertainty-1019.html
@@ -0,0 +1,127 @@
+ IEEE VIS 2024 Content: An Entropy-Based Test and Development Framework for Uncertainty Modeling in Level-Set Visualizations

An Entropy-Based Test and Development Framework for Uncertainty Modeling in Level-Set Visualizations

Robert Sisneros - University of Illinois Urbana-Champaign, Urbana, United States

Tushar M. Athawale - Oak Ridge National Laboratory, Oak Ridge, United States

Kenneth Moreland - Oak Ridge National Laboratory, Oak Ridge, United States

David Pugmire - Oak Ridge National Laboratory, Oak Ridge, United States

Room: To Be Announced

Abstract

We present a simple comparative framework for testing and developing uncertainty modeling in uncertain marching cubes implementations. The selection of a model to represent the probability distribution of uncertain values directly influences the memory use, run time, and accuracy of an uncertainty visualization algorithm. We use an entropy calculation directly on ensemble data to establish an expected result and then compare the entropy from various probability models, including uniform, Gaussian, histogram, and quantile models. Our results verify that models matching the distribution of the ensemble indeed match the entropy. We further show that fewer bins in nonparametric histogram models are more effective whereas large numbers of bins in quantile models approach data accuracy.
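
The comparison the framework performs can be sketched for a single vertex: compute the entropy of the "value exceeds the isovalue" event directly from ensemble samples, then from a fitted model, and check agreement. The numbers below are illustrative assumptions, not the paper's data (the sketch requires scipy).

```python
# Minimal sketch of the entropy comparison (our reading, not the authors'
# code): entropy of the isovalue-crossing event from raw ensemble samples
# versus from a fitted Gaussian model of the same samples.
import numpy as np
from scipy.stats import norm

def binary_entropy(p):
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

rng = np.random.default_rng(3)
samples = rng.normal(loc=0.2, scale=1.0, size=1000)   # ensemble values at one vertex
iso = 0.0

p_ens = (samples > iso).mean()                         # direct ensemble estimate
p_gauss = 1.0 - norm.cdf(iso, samples.mean(), samples.std())

# A model matching the ensemble distribution should nearly match the entropy.
print(binary_entropy(p_ens), binary_entropy(p_gauss))
```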

\ No newline at end of file
diff --git a/program/paper_w-vis4climate-1000.html b/program/paper_w-vis4climate-1000.html
new file mode 100644
index 000000000..23dd23f57
--- /dev/null
+++ b/program/paper_w-vis4climate-1000.html
@@ -0,0 +1,127 @@
+ IEEE VIS 2024 Content: TEST - Le papier

TEST - Le papier

Fanny Chevalier - University of Toronto, Toronto, Canada

Room: To Be Announced

Abstract

re

\ No newline at end of file
diff --git a/program/paper_w-vis4climate-1008.html b/program/paper_w-vis4climate-1008.html
new file mode 100644
index 000000000..21c246c15
--- /dev/null
+++ b/program/paper_w-vis4climate-1008.html
@@ -0,0 +1,127 @@
+ IEEE VIS 2024 Content: Local Climate Data Stories: Data-driven Storytelling to Communicate Effects and Mitigation of Climate Change in a Local Context

Local Climate Data Stories: Data-driven Storytelling to Communicate Effects and Mitigation of Climate Change in a Local Context

Fabian Beck - University of Bamberg, Bamberg, Germany

Lukas Panzer - University of Bamberg, Bamberg, Germany

Marc Redepenning - University of Bamberg, Bamberg, Germany

Room: To Be Announced

Abstract

Presenting the effects of and effective countermeasures for climate change is a significant challenge in science communication. Data-driven storytelling and narrative visualization can be part of the solution. However, such communication is limited when restricted to global or cross-regional scales, as climate effects are particular to a location and adaptations need to be local. In this work, we focus on data-driven storytelling that communicates local impacts of climate change. We analyze the adoption of data-driven storytelling by local news media in addressing climate-related topics. Further, we investigate the specific characteristics of the local scenario and present three application examples to showcase potential local data-driven stories. Since these examples are rooted in university teaching, we also discuss educational aspects. Finally, we summarize the interdisciplinary research challenges and opportunities for application associated with data-driven storytelling in a local context.

\ No newline at end of file
diff --git a/program/paper_w-vis4climate-1011.html b/program/paper_w-vis4climate-1011.html
new file mode 100644
index 000000000..db0fd820e
--- /dev/null
+++ b/program/paper_w-vis4climate-1011.html
@@ -0,0 +1,127 @@
+ IEEE VIS 2024 Content: EcoViz: an iterative methodology for designing multifaceted data-driven environmental visualizations that communicate ecosystem impacts and envision nature-based solutions

EcoViz: an iterative methodology for designing multifaceted data-driven environmental visualizations that communicate ecosystem impacts and envision nature-based solutions

Jessica Marielle Kendall-Bar - University of California, San Diego, San Diego, United States

Isaac Nealey - University of California, San Diego, La Jolla, United States

Ian Costello - University of California, Santa Cruz, Santa Cruz, United States

Christopher Lowrie - University of California, Santa Cruz, Santa Cruz, United States

Kevin Huynh Nguyen - University of California, San Diego, San Diego, United States

Paul J. Ponganis - University of California San Diego, La Jolla, United States

Michael W. Beck - University of California, Santa Cruz, Santa Cruz, United States

İlkay Altıntaş - University of California, San Diego, San Diego, United States

Room: To Be Announced

Abstract

Climate change’s global impact calls for coordinated visualization efforts to enhance collaboration and communication among key partners such as domain experts, community members, and policy makers. We present a collaborative initiative, EcoViz, where visualization practitioners and key partners co-designed environmental data visualizations to illustrate impacts on ecosystems and the benefit of informed management and nature-based solutions. Our three use cases rely on unique processing pipelines to represent time-dependent natural phenomena by combining cinematic, scientific, and information visualization methods. Scientific outputs are displayed through narrative data-driven animations, interactive geospatial web applications, and immersive Unreal Engine applications. Each field’s decision-making process is specific, driving design decisions about the best representation and medium for each use case. Data-driven cinematic videos with simple charts and minimal annotations proved most effective for engaging large, diverse audiences. This flexible medium facilitates reuse, maintains critical details, and integrates well into broader narrative videos. The need for interdisciplinary visualizations highlights the importance of funding to integrate visualization practitioners throughout the scientific process to better translate data and knowledge into informed policy and practice.

\ No newline at end of file
diff --git a/program/paper_w-vis4climate-1018.html b/program/paper_w-vis4climate-1018.html
new file mode 100644
index 000000000..97b9fbe51
--- /dev/null
+++ b/program/paper_w-vis4climate-1018.html
@@ -0,0 +1,127 @@
+ IEEE VIS 2024 Content: Eco-Garden: A Data Sculpture to Encourage Sustainable Practices in Everyday Life in Households

Eco-Garden: A Data Sculpture to Encourage Sustainable Practices in Everyday Life in Households

Dushani Ushettige - Cardiff University, Cardiff, United Kingdom

Nervo Verdezoto - Cardiff University, Cardiff, United Kingdom

Simon Lannon - Cardiff University, Cardiff, United Kingdom

Jullie Gwilliam - Cardiff University, Cardiff, United Kingdom

Parisa Eslambolchilar - Cardiff University, Cardiff, United Kingdom

Room: To Be Announced

Abstract

Household consumption significantly impacts climate change. Yet designing interventions that encourage consumption reduction and are tailored to each home's needs remains challenging. To address this, we developed Eco-Garden, a data sculpture that visualises household consumption with the aim of promoting sustainable practices. Eco-Garden serves as both an aesthetic piece for visitors and a functional tool for household members to understand their resource consumption. In this paper, we present the human-centred design process of Eco-Garden and the preliminary findings from our field study. We conducted the field study with 15 households to explore participants' experience with Eco-Garden and its potential to encourage sustainable practices at home. Our participants provided positive feedback on integrating Eco-Garden into their homes, highlighting considerations such as aesthetics, physicality, and the calm manner of presenting consumption data. Our insights contribute to developing data sculptures for households that can facilitate meaningful interactions with consumption data.

\ No newline at end of file
diff --git a/program/paper_w-vis4climate-1023.html b/program/paper_w-vis4climate-1023.html
new file mode 100644
index 000000000..e0456b623
--- /dev/null
+++ b/program/paper_w-vis4climate-1023.html
@@ -0,0 +1,127 @@
+ IEEE VIS 2024 Content: AwARe: Using handheld augmented reality for researching the potential of food resource information visualization

AwARe: Using handheld augmented reality for researching the potential of food resource information visualization

Nina Rosa - Wageningen University and Research, Wageningen, Netherlands

Room: To Be Announced

Abstract

Consumers have the potential to play a large role in mitigating the climate crisis by taking on more pro-environmental behavior, for example by making more sustainable food choices. However, while environmental awareness is common among consumers, it is not always clear what the current impact of one's own food choices is, and consequently it is not always clear how or why one's own behavior must change, or how important the change is. Immersive technologies have been shown to aid in these aspects. In this paper, we bring food production into the home by means of handheld augmented reality. Using the current prototype, users can input on their smartphone which ingredients are in their meal and, after making a 3D scan of their kitchen, see the plants, livestock, feed, and water required for all of them visualized in front of them. We describe the design of the current prototype and, by analyzing the current state of research on virtual and augmented reality for sustainability research, describe the ways in which the application could be extended in terms of data, models, and interaction to investigate the most prominent issues within environmental sustainability communications research.

\ No newline at end of file
diff --git a/program/paper_w-vis4climate-1024.html b/program/paper_w-vis4climate-1024.html
new file mode 100644
index 000000000..0626d54fb
--- /dev/null
+++ b/program/paper_w-vis4climate-1024.html
@@ -0,0 +1,127 @@
+ IEEE VIS 2024 Content: Cultivating Climate Action Through Multi-Institutional Collaboration: Innovative Data Visualization Educational Programs and Exhibits for Public Engagement

Cultivating Climate Action Through Multi-Institutional Collaboration: Innovative Data Visualization Educational Programs and Exhibits for Public Engagement

Beth Altringer Eagle - Brown University, Providence, United States. Rhode Island School of Design, Providence, United States

Elisabeth Sylvan - Harvard University, Cambridge, United States

Room: To Be Announced

Abstract

This paper details the development and implementation of a collaborative exhibit at Boston's Museum of Science showcasing interactive data visualizations designed to educate the public on global sustainability and urban environmental concerns. Supported by cross-institutional collaboration, the exhibit provided a rich real-world learning opportunity for students, resulting in a set of public-facing educational resources that informed visitors of global sustainability concerns through the lens of a local municipality. The realization of this project was made possible only by a close collaboration between a municipality, a science museum, and academic partners, all of whom committed their expertise and resources at both the leadership and implementation team levels. Focusing on promoting sustainability and enhancing community well-being, this initiative highlights the transformative potential of cross-institutional collaboration and locally relevant interactive data visualizations to educate, inspire action, and foster community engagement in addressing climate change and urban sustainability.

\ No newline at end of file
diff --git a/program/paperlist.html b/program/paperlist.html
index 6684bf2ae..a122ba571 100644
--- a/program/paperlist.html
+++ b/program/paperlist.html
@@ -1 +1 @@
- ## Full Papers ## Short papers
\ No newline at end of file
+ ## Full Papers ## Short papers **Interactive Public Transport Infrastructure Analysis through Mobility Profiles: Making the Mobility Transition Transparent**
Authors: {'name': 'Yannick Metz', 'email': 'yannick.metz@uni-konstanz.de', 'affiliations': ['University of Konstanz, Konstanz, Germany'], 'is_corresponding': False}, {'name': 'Dennis Ackermann', 'email': 'dennis-fabian.ackermann@uni-konstanz.de', 'affiliations': ['University of Konstanz, Konstanz, Germany'], 'is_corresponding': False}, {'name': 'Daniel Keim', 'email': 'keim@uni-konstanz.de', 'affiliations': ['University of Konstanz, Konstanz, Germany'], 'is_corresponding': False}, {'name': 'Maximilian T. Fischer', 'email': 'max.fischer@uni-konstanz.de', 'affiliations': ['University of Konstanz, Konstanz, Germany'], 'is_corresponding': True} **Visualization and Automation in Data Science: Exploring the Paradox of Humans-in-the-Loop**
Authors: {'name': 'Jen Rogers', 'email': 'jen@cs.tufts.edu', 'affiliations': ['Tufts University, Boston, United States'], 'is_corresponding': True}, {'name': 'Mehdi Chakhchoukh', 'email': 'mehdi.chakhchoukh@universite-paris-saclay.fr', 'affiliations': ['Université Paris-Saclay, CNRS, INRIA, Orsay, France'], 'is_corresponding': False}, {'name': 'Marie Anastacio', 'email': 'anastacio@aim.rwth-aachen.de', 'affiliations': ['Leiden Universiteit, Leiden, Netherlands'], 'is_corresponding': False}, {'name': 'Rebecca Faust', 'email': 'rfaust1@tulane.edu', 'affiliations': ['Tulane University, New Orleans, United States'], 'is_corresponding': False}, {'name': 'Cagatay Turkay', 'email': 'cagatay.turkay@warwick.ac.uk', 'affiliations': ['University of Warwick, Coventry, United Kingdom'], 'is_corresponding': False}, {'name': 'Lars Kotthoff', 'email': 'larsko@uwyo.edu', 'affiliations': ['University of Wyoming, Laramie, United States'], 'is_corresponding': False}, {'name': 'Steffen Koch', 'email': 'steffen.koch@vis.uni-stuttgart.de', 'affiliations': ['University of Stuttgart, Stuttgart, Germany'], 'is_corresponding': False}, {'name': 'Andreas Kerren', 'email': 'andreas.kerren@liu.se', 'affiliations': ['Linköping University, Norrköping, Sweden'], 'is_corresponding': False}, {'name': 'Jürgen Bernard', 'email': 'bernard@ifi.uzh.ch', 'affiliations': ['University of Zurich, Zurich, Switzerland'], 'is_corresponding': False} **The Categorical Data Map: A Multidimensional Scaling-Based Approach**
Authors: {'name': 'Frederik L. Dennig', 'email': 'frederik.dennig@uni-konstanz.de', 'affiliations': ['University of Konstanz, Konstanz, Germany'], 'is_corresponding': True}, {'name': 'Lucas Joos', 'email': 'lucas.joos@uni-konstanz.de', 'affiliations': ['University of Konstanz, Konstanz, Germany'], 'is_corresponding': False}, {'name': 'Patrick Paetzold', 'email': 'patrick.paetzold@uni-konstanz.de', 'affiliations': ['University of Konstanz, Konstanz, Germany'], 'is_corresponding': False}, {'name': 'Daniela Blumberg', 'email': 'blumbergdaniela@gmail.com', 'affiliations': ['University of Konstanz, Konstanz, Germany'], 'is_corresponding': False}, {'name': 'Oliver Deussen', 'email': 'oliver.deussen@uni-konstanz.de', 'affiliations': ['University of Konstanz, Konstanz, Germany'], 'is_corresponding': False}, {'name': 'Daniel Keim', 'email': 'keim@uni-konstanz.de', 'affiliations': ['University of Konstanz, Konstanz, Germany'], 'is_corresponding': False}, {'name': 'Maximilian T. Fischer', 'email': 'max.fischer@uni-konstanz.de', 'affiliations': ['University of Konstanz, Konstanz, Germany'], 'is_corresponding': False} **Towards a Visual Perception-Based Analysis of Clustering Quality Metrics**
Authors: {'name': 'Graziano Blasilli', 'email': 'blasilli@diag.uniroma1.it', 'affiliations': ['Sapienza University of Rome, Rome, Italy'], 'is_corresponding': True}, {'name': 'Daniel Kerrigan', 'email': 'kerrigan.d@northeastern.edu', 'affiliations': ['Northeastern University, Boston, United States'], 'is_corresponding': False}, {'name': 'Enrico Bertini', 'email': 'e.bertini@northeastern.edu', 'affiliations': ['Northeastern University, Boston, United States'], 'is_corresponding': False}, {'name': 'Giuseppe Santucci', 'email': 'santucci@diag.uniroma1.it', 'affiliations': ['Sapienza University of Rome, Rome, Italy'], 'is_corresponding': False} **Interactive Counterfactual Exploration of Algorithmic Harms in Recommender Systems**
Authors: {'name': 'Yongsu Ahn', 'email': 'yongsu.ahn@pitt.edu', 'affiliations': ['University of Pittsburgh, Pittsburgh, United States'], 'is_corresponding': True}, {'name': 'Quinn K Wolter', 'email': 'quinnkwolter@gmail.com', 'affiliations': ['School of Computing and Information, University of Pittsburgh, Pittsburgh, United States'], 'is_corresponding': False}, {'name': 'Jonilyn Dick', 'email': 'jonilyndick@gmail.com', 'affiliations': ['Quest Diagnostics, Pittsburgh, United States'], 'is_corresponding': False}, {'name': 'Janet Dick', 'email': 'janetad99@gmail.com', 'affiliations': ['Quest Diagnostics, Pittsburgh, United States'], 'is_corresponding': False}, {'name': 'Yu-Ru Lin', 'email': 'yurulin@pitt.edu', 'affiliations': ['University of Pittsburgh, Pittsburgh, United States'], 'is_corresponding': False} **Seeing the Shift: Keep an Eye on Semantic Changes in Times of LLMs**
Authors: {'name': 'Raphael Buchmüller', 'email': 'raphael.buchmueller@uni-konstanz.de', 'affiliations': ['University of Konstanz, Konstanz, Germany'], 'is_corresponding': True}, {'name': 'Friederike Körte', 'email': 'friederike.koerte@uni-konstanz.de', 'affiliations': ['University of Konstanz, Konstanz, Germany'], 'is_corresponding': False}, {'name': 'Daniel Keim', 'email': 'keim@uni-konstanz.de', 'affiliations': ['University of Konstanz, Konstanz, Germany'], 'is_corresponding': False} \ No newline at end of file diff --git a/program/papers.json b/program/papers.json index 54a568ea6..d00cdc8d6 100644 --- a/program/papers.json +++ b/program/papers.json @@ -1 +1 @@ -[{"UID":"v-short-1040","abstract":"From dirty data to intentional deception, there are many threats to the validity of data-driven decisions. Making use of data, especially new or unfamiliar data, therefore requires a degree of trust or verification. How is this trust established? In this paper, we present the results of a series of interviews with both producers and consumers of data artifacts (outputs of data ecosystems like spreadsheets, charts, and dashboards) aimed at understanding strategies and obstacles to building trust in data. We find a recurring need, but lack of existing standards, for data validation and verification, especially among data consumers. We therefore propose a set of data guards: methods and tools for fostering trust in data artifacts.","authors":[{"affiliations":["Tableau Research, Seattle, United States"],"email":"nicole.sultanum@gmail.com","is_corresponding":true,"name":"Nicole Sultanum"},{"affiliations":["Tableau Research, Seattle, United States"],"email":"bromley.denny@gmail.com","is_corresponding":false,"name":"Dennis Bromley"},{"affiliations":["Northeastern University, Portland, United States"],"email":"m.correll@northeastern.edu","is_corresponding":false,"name":"Michael Correll"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1040","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Data Guards: Challenges and Solutions for Fostering Trust in Data","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1047","abstract":"In the rapidly evolving field of deep learning, the traditional methodologies for designing deep learning models predominantly rely on code-based frameworks. While these approaches provide flexibility, they also create a significant barrier to entry for non-experts and obscure the immediate impact of architectural decisions on model performance. In response to this challenge, recent no-code approaches have been developed with the aim of enabling easy model development through graphical interfaces. However, both traditional and no-code methodologies share a common limitation that the inability to predict model outcomes or identify issues without executing the model. 
To address this limitation, we introduce an intuitive visual feedback-based no-code approach to visualize and analyze deep learning models during the design phase. This approach utilizes dataflow-based visual programming with dynamic visual encoding of model architecture. A user study was conducted with deep learning developers to demonstrate the effectiveness of our approach in enhancing the model design process, improving model understanding, and facilitating a more intuitive development experience. The findings of this study suggest that real-time architectural visualization significantly contributes to more efficient model development and a deeper understanding of model behaviors.","authors":[{"affiliations":["VIENCE Inc., Seoul, Korea, Republic of","Korea University, Seoul, Korea, Republic of"],"email":"juny0603@gmail.com","is_corresponding":true,"name":"JunYoung Choi"},{"affiliations":["VIENCE Inc., Seoul, Korea, Republic of"],"email":"wings159@vience.co.kr","is_corresponding":false,"name":"Sohee Park"},{"affiliations":["Korea University, Seoul, Korea, Republic of"],"email":"hellenkoh@gmail.com","is_corresponding":false,"name":"GaYeon Koh"},{"affiliations":["VIENCE Inc., Seoul, Korea, Republic of"],"email":"k0seo0330@vience.co.kr","is_corresponding":false,"name":"Youngseo Kim"},{"affiliations":["VIENCE Inc., Seoul, Korea, Republic of","Korea University, Seoul, Korea, Republic of"],"email":"wkjeong@korea.ac.kr","is_corresponding":false,"name":"Won-Ki Jeong"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1047","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Intuitive Design of Deep Learning Models through Visual Feedback","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1049","abstract":"This comparative study evaluates various neural surface reconstruction methods, particularly focusing on their implications for scientific visualization through reconstructing 3D surfaces via multi-view rendering images. We categorize ten methods into neural radiance fields and neural implicit surfaces, uncovering the benefits of leveraging distance functions (i.e., SDFs and UDFs) to enhance the accuracy and smoothness of the reconstructed surfaces. Our findings highlight the efficiency and quality of NeuS2 for reconstructing closed surfaces and identify NeUDF as a promising candidate for reconstructing open surfaces despite some limitations. We further pinpoint directions for future research, including improving detail capture, optimizing UDF computations, and refining surface extraction methods. 
By sharing our benchmark dataset, we invite researchers to test the performance of their methods, contributing to the advancement of surface reconstruction solutions for scientific visualization.","authors":[{"affiliations":["University of Notre Dame, Notre Dame, United States"],"email":"syao2@nd.edu","is_corresponding":true,"name":"Siyuan Yao"},{"affiliations":["Wuhan University, Wuhan, China"],"email":"song.wx@whu.edu.cn","is_corresponding":false,"name":"Weixi Song"},{"affiliations":["University of Notre Dame, Notre Dame, United States"],"email":"chaoli.wang@nd.edu","is_corresponding":false,"name":"Chaoli Wang"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1049","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"A Comparative Study of Neural Surface Reconstruction for Scientific Visualization","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1054","abstract":"Direct volume rendering using ray-casting is widely used in practice. By using GPUs and applying acceleration techniques as empty space skipping, high frame rates are possible on modern hardware. This enables performance-critical use-cases such as virtual reality volume rendering. The currently fastest known technique uses volumetric distance maps to skip empty sections of the volume during ray-casting but requires the distance map to be updated per transfer function change. In this paper, we demonstrate a technique for subdividing the volume intensity range into partitions and deriving what we call partitioned distance maps. These can be used to accelerate the distance map computation for a newly changed transfer function by a factor up to 30. 
This allows the currently fastest known empty space skipping approach to be used while maintaining high frame rates even when the transfer function is changed frequently.","authors":[{"affiliations":["University of Applied Sciences Wiener Neustadt, Wiener Neustadt, Austria"],"email":"michael.rauter@fhwn.ac.at","is_corresponding":true,"name":"Michael Rauter"},{"affiliations":["Medical University of Vienna, Vienna, Austria"],"email":"lukas.a.zimmermann@meduniwien.ac.at","is_corresponding":false,"name":"Lukas Zimmermann PhD"},{"affiliations":["University of Applied Sciences Wiener Neustadt, Wiener Neustadt, Austria"],"email":"markus.zeilinger@fhwn.ac.at","is_corresponding":false,"name":"Markus Zeilinger PhD"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1054","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Accelerating Transfer Function Update for Distance Map based Volume Rendering","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1056","abstract":"We present FCNR, a fast compressive neural representation for tens of thousands of visualization images under varying viewpoints and timesteps. The existing NeRVI solution, albeit enjoying a high compression rate, incurs slow speeds in encoding and decoding. Built on the recent advances in stereo image compression, FCNR assimilates stereo context modules and joint context transfer modules to compress image pairs. Our solution significantly improves encoding and decoding speed while maintaining high reconstruction quality and satisfying compression rate. 
To demonstrate its effectiveness, we compare FCNR with state-of-the-art neural compression methods, including E-NeRV, HNeRV, NeRVI, and ECSIC.","authors":[{"affiliations":["University of Notre Dame, Notre Dame, United States"],"email":"ylu25@nd.edu","is_corresponding":true,"name":"Yunfei Lu"},{"affiliations":["University of Notre Dame, Notre Dame, United States"],"email":"pgu@nd.edu","is_corresponding":false,"name":"Pengfei Gu"},{"affiliations":["University of Notre Dame, Notre Dame, United States"],"email":"chaoli.wang@nd.edu","is_corresponding":false,"name":"Chaoli Wang"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1056","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"FCNR: Fast Compressive Neural Representation of Visualization Images","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1057","abstract":"Real-world datasets often consist of quantitative and categorical variables. The analyst needs to focus on either kind separately or both jointly. We proposed a visualization technique tackling these challenges that supports visual cluster and set analysis. In this paper, we investigate how its visualization parameters affect the accuracy and speed of cluster and set analysis tasks in a controlled experiment. Our findings show that, with the proper settings, our visualization can support both task types well. 
However, we did not find settings suitable for the joint task, which provides opportunities for future research.","authors":[{"affiliations":["TU Wien, Vienna, Austria"],"email":"nikolaus.piccolotto@tuwien.ac.at","is_corresponding":true,"name":"Nikolaus Piccolotto"},{"affiliations":["TU Wien, Vienna, Austria"],"email":"mwallinger@ac.tuwien.ac.at","is_corresponding":false,"name":"Markus Wallinger"},{"affiliations":["Institute of Visual Computing and Human-Centered Technology, Vienna, Austria"],"email":"miksch@ifs.tuwien.ac.at","is_corresponding":false,"name":"Silvia Miksch"},{"affiliations":["TU Wien, Vienna, Austria"],"email":"markus.boegl@tuwien.ac.at","is_corresponding":false,"name":"Markus B\u00f6gl"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1057","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"On Combined Visual Cluster and Set Analysis","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1058","abstract":"Semantic interaction (SI) in Dimension Reduction (DR) of images allows users to incorporate feedback through direct manipulation of the 2D positions of images. Through interaction, users specify a set of pairwise relationships that the DR should aim to capture. Existing methods for images incorporate feedback into the DR through feature weights on abstract embedding features. However, if the original embedding features do not suitably capture the user's task, then the DR cannot either. We propose ImageSI, an SI method for image DR that incorporates user feedback directly into the image model to update the underlying embeddings, rather than weighting them. In doing so, ImageSI ensures that the embeddings suitably capture the features necessary for the task so that the DR can subsequently organize images using those features. We present two variations of ImageSI using different loss functions: ImageSI_MDS-Inverse, which prioritizes the explicit pairwise relationships from the interaction, and ImageSI_Triplet, which prioritizes clustering, using the interaction to define groups of images.
Finally, we present a usage scenario and a simulation-based evaluation to demonstrate the utility of ImageSI and compare it to current methods.","authors":[{"affiliations":["Virginia Tech, Blacksburg, United States"],"email":"jiayuelin@vt.edu","is_corresponding":false,"name":"Jiayue Lin"},{"affiliations":["Tulane University, New Orleans, United States"],"email":"rfaust1@tulane.edu","is_corresponding":true,"name":"Rebecca Faust"},{"affiliations":["Virginia Tech, Blacksburg, United States"],"email":"north@vt.edu","is_corresponding":false,"name":"Chris North"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1058","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"ImageSI: Semantic Interaction for Deep Learning Image Projections","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1059","abstract":"Gantt charts are a widely-used idiom for visualizing temporal discrete event sequence data where dependencies exist between events. They are popular in domains such as manufacturing and computing for their intuitive layout of such data. However, these domains frequently generate data at scales which tax both the visual representation and the ability to render it at interactive speeds. To aid visualization developers who use Gantt charts in these situations, we develop a task taxonomy of low-level visualization tasks supported by Gantt charts and connect them to the data queries needed to support them. Our taxonomy is derived through a systematic literature survey of visualizations using Gantt charts over the past 30 years.","authors":[{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"sayefsakin@sci.utah.edu","is_corresponding":true,"name":"Sayef Azad Sakin"},{"affiliations":["The University of Utah, Salt Lake City, United States"],"email":"kisaacs@sci.utah.edu","is_corresponding":false,"name":"Katherine E.
Isaacs"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1059","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"A Literature-based Visualization Task Taxonomy for Gantt charts","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1062","abstract":"Annotations are a critical component of visualizations, helping viewers interpret the visual representation and highlighting critical data insights. Despite its significant role, we lack an understanding of how annotations can be incorporated into other data representations, such as physicalizations and sonifications. Given the emergent nature of these representations, sonifications and physicalizations lack formalized conventions (e.g., design space, vocabulary) that can introduce challenges for audiences to interpret the intended data encoding. To address this challenge, this work focuses on how annotations can be more tightly integrated into the design process of creating sonifications and physicalization. In an exploratory study with 13 designers, we explore how visualization annotation techniques can be adapted to sonic and physical modalities. Our work highlights how annotations for sonification and physicalizations are inseparable from their data encodings","authors":[{"affiliations":["Whitman College, Walla Walla, United States"],"email":"sorensor@whitman.edu","is_corresponding":false,"name":"Rhys Sorenson-Graff"},{"affiliations":["University of Colorado Boulder, Boulder, United States"],"email":"sandra.bae@colorado.edu","is_corresponding":true,"name":"S. Sandra Bae"},{"affiliations":["Whitman College, Walla Walla, United States"],"email":"wirfsbro@colorado.edu","is_corresponding":false,"name":"Jordan Wirfs-Brock"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1062","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Integrating Annotations into the Design Process for Sonifications and Physicalizations","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1064","abstract":"Large Language Models (LLMs) have demonstrated remarkable versatility in visualization authoring, but often generate suboptimal designs that are invalid or fail to adhere to design guidelines for effective visualization. 
We present Bavisitter, a natural language interface that integrates established visualization design guidelines into LLMs. Based on our survey on the design issues in LLM-generated visualizations, Bavisitter monitors the generated visualizations during a visualization authoring dialogue to detect an issue. When an issue is detected, it intervenes in the dialogue, suggesting possible solutions to the issue by modifying the prompts. We also demonstrate two use cases where Bavisitter detects and resolves design issues from the actual LLM-generated visualizations.","authors":[{"affiliations":["Sungkyunkwan University, Suwon, Korea, Republic of"],"email":"jiwnchoi@skku.edu","is_corresponding":true,"name":"Jiwon Choi"},{"affiliations":["Sungkyunkwan University, Suwon, Korea, Republic of"],"email":"dlwodnd00@skku.edu","is_corresponding":false,"name":"Jaeung Lee"},{"affiliations":["Sungkyunkwan University, Suwon, Korea, Republic of"],"email":"jmjo@skku.edu","is_corresponding":false,"name":"Jaemin Jo"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1064","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Bavisitter: Integrating Design Guidelines into Large Language Models for Visualization Authoring","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1065","abstract":"Although many dimensionality reduction (DR) techniques employ stochastic methods for computational efficiency, such as negative sampling or stochastic gradient descent, their impact on the projection has been underexplored. In this work, we investigate how such stochasticity affects the stability of projections and present a novel DR technique, GhostUMAP, to measure the pointwise instability of projections. Our idea is to introduce clones of data points, \"ghosts\", into UMAP's layout optimization process. Ghosts are designed to be completely passive: they do not affect any others but are influenced by attractive and repulsive forces from the original data points. After a single optimization run, GhostUMAP can capture the projection instability of data points by measuring the variance with the projected positions of their ghosts. We also present a successive halving technique to reduce the computation of GhostUMAP. 
Our results suggest that GhostUMAP can reveal unstable data points with a reasonable computational overhead.","authors":[{"affiliations":["Sungkyunkwan University, Suwon, Korea, Republic of"],"email":"mw.jung@skku.edu","is_corresponding":true,"name":"Myeongwon Jung"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"takanori.fujiwara@liu.se","is_corresponding":false,"name":"Takanori Fujiwara"},{"affiliations":["Sungkyunkwan University, Suwon, Korea, Republic of"],"email":"jmjo@skku.edu","is_corresponding":false,"name":"Jaemin Jo"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1065","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"GhostUMAP: Measuring Pointwise Instability in Dimensionality Reduction","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1068","abstract":"Integrating textual content, such as titles, annotations, and captions, with visualizations facilitates comprehension and takeaways during data exploration. Yet current tools often lack mechanisms for integrating meaningful text with visual data. This paper introduces DASH, a bimodal data exploration tool that supports integrating semantic levels into the interactive process of visualization and text-based analysis. DASH operationalizes a modified version of Lundgard et al.'s semantic hierarchy model that categorizes data descriptions into four levels ranging from basic encodings to high-level insights. By leveraging this structured semantic level framework and a large language model's text generation capabilities, DASH enables the creation of data-driven narratives via drag-and-drop user interaction. Through a preliminary user evaluation, we discuss the utility of DASH's text and chart integration capabilities when participants perform data exploration with the tool. 
Based on the study's feedback and observations, we discuss implications for designing unified text and chart authoring tools.","authors":[{"affiliations":["Tableau Research, Seattle, United States"],"email":"bromley.denny@gmail.com","is_corresponding":true,"name":"Dennis Bromley"},{"affiliations":["Tableau Research, Palo Alto, United States"],"email":"vsetlur@tableau.com","is_corresponding":false,"name":"Vidya Setlur"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1068","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"DASH: A Bimodal Data Exploration Tool for Interactive Text and Visualizations","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1072","abstract":"Recent advancements in vision models have significantly enhanced their ability to perform complex chart understanding tasks, such as chart captioning and chart question answering. However, assessing how these models process charts remains challenging. Existing benchmarks only coarsely evaluate how well the model performs the given task without thoroughly evaluating the underlying mechanisms that drive performance, such as how models extract image embeddings. This gap limits our understanding of the model's perceptual capabilities regarding fundamental graphical components. Therefore, we introduce a novel evaluation framework designed to assess the graphical perception of image embedding models. In the context of chart comprehension, we examine two main aspects of channel effectiveness: accuracy and discriminability of various visual channels. We first assess channel accuracy through the linearity of embeddings, which is the degree to which the perceived magnitude is proportional to the size of the stimulus. Conversely, distances between embeddings serve as a measure of discriminability; embeddings that are far apart can be considered discriminable. Our experiments on a general image embedding model, CLIP, showed that it perceives channel accuracy differently from humans and demonstrated distinct discriminability in specific channels such as length, tilt, and curvature.
We aim to extend our work as a more general benchmark for reliable visual encoders and enhance a model for two distinctive goals for future applications: precise chart comprehension and mimicking human perception.","authors":[{"affiliations":["Seoul National University, Seoul, Korea, Republic of"],"email":"dtngus0111@gmail.com","is_corresponding":true,"name":"Soohyun Lee"},{"affiliations":["Seoul National University, Seoul, Korea, Republic of"],"email":"jangsus1@snu.ac.kr","is_corresponding":false,"name":"Minsuk Chang"},{"affiliations":["Seoul National University, Seoul, Korea, Republic of"],"email":"shpark@hcil.snu.ac.kr","is_corresponding":false,"name":"Seokhyeon Park"},{"affiliations":["Seoul National University, Seoul, Korea, Republic of"],"email":"jseo@snu.ac.kr","is_corresponding":false,"name":"Jinwook Seo"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1072","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Assessing Graphical Perception of Image Embedding Models using Channel Effectiveness","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1078","abstract":"Data visualizations are reaching global audiences. As people who use Right-to-left (RTL) scripts constitute over a billion potential data visualization users, a need emerges to investigate how visualizations are communicated to them. Web design guidelines exist to assist designers in adapting different reading directions, yet we lack a similar standard for visualization design. This paper investigates the design patterns of visualizations with RTL scripts. We collected 128 visualizations from data-driven articles published in Arabic news outlets and analyzed their chart composition, textual elements, and sources. Our analysis suggests that designers tend to apply RTL approaches more frequently for categorical data. In other situations, we observed a mix of Left-to-right (LTR) and RTL approaches for chart directions and structures, sometimes inconsistently utilized within the same article. 
We reflect on this lack of clear guidelines for RTL data visualizations and derive implications for visualization authoring tools and future research directions.","authors":[{"affiliations":["University College London, London, United Kingdom","UAE University , Al Ain, United Arab Emirates"],"email":"muna.alebri.19@ucl.ac.uk","is_corresponding":true,"name":"Muna Alebri"},{"affiliations":["Worcester Polytechnic Institute, Worcester, United States"],"email":"ntrakotondravony@wpi.edu","is_corresponding":false,"name":"No\u00eblle Rakotondravony"},{"affiliations":["Worcester Polytechnic Institute, Worcester, United States"],"email":"ltharrison@wpi.edu","is_corresponding":false,"name":"Lane Harrison"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1078","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Design Patterns in Right-to-Left Visualizations: The Case of Arabic Content","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1079","abstract":"Image datasets serve as the foundation for machine learning models in computer vision, significantly influencing model capabilities, performance, and biases alongside architectural considerations. Therefore, understanding the composition and distribution of these datasets has become increasingly crucial. To address the need for intuitive exploration of these datasets, we propose AEye, an extensible and scalable visualization tool tailored to image datasets. AEye utilizes a contrastively trained model to embed images into semantically meaningful high-dimensional representations, facilitating data clustering and organization. To visualize the high-dimensional representations, we project them onto a two-dimensional plane and arrange images in layers so users can seamlessly navigate and explore them interactively. Furthermore, AEye facilitates semantic search functionalities for both text and image queries, enabling users to search for content. 
We open-source the codebase for AEye and provide a simple configuration to add additional datasets.","authors":[{"affiliations":["ETH Zurich, Zurich, Switzerland"],"email":"fgroetschla@ethz.ch","is_corresponding":false,"name":"Florian Gr\u00f6tschla"},{"affiliations":["ETH Zurich, Zurich, Switzerland"],"email":"lanzendoerfer@ethz.ch","is_corresponding":false,"name":"Luca A Lanzend\u00f6rfer"},{"affiliations":["ETH Zurich, Zurich, Switzerland"],"email":"mcalzavara@student.ethz.ch","is_corresponding":false,"name":"Marco Calzavara"},{"affiliations":["ETH Zurich, Zurich, Switzerland"],"email":"wattenhofer@ethz.ch","is_corresponding":false,"name":"Roger Wattenhofer"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1079","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"AEye: A Visualization Tool for Image Datasets","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1081","abstract":"The sine illusion occurs when more quickly changing pairs of lines lead to larger underestimates of the delta between them. In a user study, we evaluate three visual manipulations for mitigating sine illusions: dotted lines, aligned gridlines, and offset gridlines. We asked participants to compare the deltas between two lines at two time points and found aligned gridlines to be the most effective in mitigating sine illusions. Using data from the user study, we produced a model that predicts the impact of the sine illusion in line charts by accounting for the ratio of the vertical distance between the two points of comparison. When the ratio is less than 50%, participants begin to be influenced by the sine illusion. This effect can be significantly exacerbated when the difference between the two deltas falls under 30%. We compared two explanations for the sine illusion based on our data: either participants were mistakenly using the perpendicular distance between the two lines to make their comparison (the perpendicular explanation), or they incorrectly relied on the length of the line segment perpendicular to the angle bisector of the bottom and top lines (the equal triangle explanation).
We found the equal triangle explanation to be the more predictive model explaining participant behaviors.","authors":[{"affiliations":["Google LLC, San Francisco, United States"],"email":"cknit1999@gmail.com","is_corresponding":false,"name":"Clayton J Knittel"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"jawuah3@gatech.edu","is_corresponding":false,"name":"Jane Awuah"},{"affiliations":["Northwestern University, Evanston, United States"],"email":"franconeri@northwestern.edu","is_corresponding":false,"name":"Steven L Franconeri"},{"affiliations":["Georgia Tech, Atlanta, United States"],"email":"cxiong@gatech.edu","is_corresponding":true,"name":"Cindy Xiong Bearfield"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1081","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Gridlines Mitigate Sine Illusion in Line Charts","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1089","abstract":"In healthcare, AI techniques are widely used for tasks like risk assessment and anomaly detection. Despite AI's potential as a valuable assistant, its role in complex medical data analysis often oversimplifies human-AI collaboration dynamics. To address this, we collaborated with a local hospital, engaging six physicians and one data scientist in a formative study. From this collaboration, we propose a framework integrating two-phase interactive visualization systems: one for Human-Led, AI-Assisted Retrospective Analysis and another for AI-Mediated, Human-Reviewed Iterative Modeling. 
This framework aims to enhance understanding and discussion around effective human-AI collaboration in healthcare.","authors":[{"affiliations":["ShanghaiTech University, Shanghai, China","ShanghaiTech University, Shanghai, China"],"email":"ouyy@shanghaitech.edu.cn","is_corresponding":true,"name":"Yang Ouyang"},{"affiliations":["University of Illinois at Urbana-Champaign, Champaign, United States","University of Illinois at Urbana-Champaign, Champaign, United States"],"email":"zhang414@illinois.edu","is_corresponding":false,"name":"Chenyang Zhang"},{"affiliations":["ShanghaiTech University, Shanghai, China","ShanghaiTech University, Shanghai, China"],"email":"wanghe1@shanghaitech.edu.cn","is_corresponding":false,"name":"He Wang"},{"affiliations":["Zhongshan Hospital Fudan University, Shanghai, China","Zhongshan Hospital Fudan University, Shanghai, China"],"email":"15301050137@fudan.edu.cn","is_corresponding":false,"name":"Tianle Ma"},{"affiliations":["Zhongshan Hospital Fudan University, Shanghai, China","Zhongshan Hospital Fudan University, Shanghai, China"],"email":"cjiang_fdu@yeah.net","is_corresponding":false,"name":"Chang Jiang"},{"affiliations":["Zhongshan Hospital Fudan University, Shanghai, China","Zhongshan Hospital Fudan University, Shanghai, China"],"email":"522649732@qq.com","is_corresponding":false,"name":"Yuheng Yan"},{"affiliations":["Zhongshan Hospital Fudan University, Shanghai, China","Zhongshan Hospital Fudan University, Shanghai, China"],"email":"yan.zuoqin@zs-hospital.sh.cn","is_corresponding":false,"name":"Zuoqin Yan"},{"affiliations":["Hong Kong University of Science and Technology, Hong Kong, Hong Kong","Hong Kong University of Science and Technology, Hong Kong, Hong Kong"],"email":"mxj@cse.ust.hk","is_corresponding":false,"name":"Xiaojuan Ma"},{"affiliations":["Southeast University, Nanjing, China","Southeast University, Nanjing, China"],"email":"cshiag@connect.ust.hk","is_corresponding":false,"name":"Chuhan Shi"},{"affiliations":["ShanghaiTech University, Shanghai, China","ShanghaiTech University, Shanghai, China"],"email":"liquan@shanghaitech.edu.cn","is_corresponding":false,"name":"Quan Li"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1089","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"A Two-Phase Visualization System for Continuous Human-AI Collaboration in Sequelae Analysis and Modeling","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1090","abstract":"Visualizing high dimensional data is challenging, since any dimensionality reduction technique will distort distances. A classic method in cartography\u2013Tissot\u2019s Indicatrix, specific to sphere-to-plane maps\u2013visualizes distortion using ellipses. Inspired by this idea, we describe the hypertrix: a method for representing distortions that occur when data is projected from arbitrarily high dimensions onto a 2D plane. 
We demonstrate our technique through synthetic and real-world datasets, and describe how this indicatrix can guide interpretations of nonlinear dimensionality reduction","authors":[{"affiliations":["Harvard University, Boston, United States"],"email":"sraval@g.harvard.edu","is_corresponding":true,"name":"Shivam Raval"},{"affiliations":["Harvard University, Cambridge, United States","Google Research, Cambridge, United States"],"email":"viegas@google.com","is_corresponding":false,"name":"Fernanda Viegas"},{"affiliations":["Harvard University, Cambridge, United States","Google Research, Cambridge, United States"],"email":"wattenberg@gmail.com","is_corresponding":false,"name":"Martin Wattenberg"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1090","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Hypertrix: An indicatrix for high-dimensional visualizations","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1096","abstract":"Coordinated multiple views (CMV) in a visual analytics system can help users explore multiple data representations simultaneously with linked interactions. However, the implementation of coordinated multiple views can be challenging. Without standard software libraries, visualization designers need to re-implement CMV during the development of each system. We introduce use-coordination, a grammar and software library that supports the efficient implementation of CMV. The grammar defines a JSON-based representation for an abstract coordination model from the information visualization literature. We contribute an optional extension to the model and grammar that allows for hierarchical coordination. Through three use cases, we show that use-coordination enables implementation of CMV in systems containing not only basic statistical charts but also more complex visualizations such as medical imaging volumes. 
We describe six software extensions, including a graphical editor for manipulation of coordination, which showcase the potential to build upon our coordination-focused declarative approach.","authors":[{"affiliations":["Harvard Medical School, Boston, United States"],"email":"mark_keller@hms.harvard.edu","is_corresponding":true,"name":"Mark S Keller"},{"affiliations":["Harvard Medical School, Boston, United States"],"email":"trevor_manz@g.harvard.edu","is_corresponding":false,"name":"Trevor Manz"},{"affiliations":["Harvard Medical School, Boston, United States"],"email":"nils@hms.harvard.edu","is_corresponding":false,"name":"Nils Gehlenborg"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1096","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Use-Coordination: Model, Grammar, and Library for Implementation of Coordinated Multiple Views","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1097","abstract":"Visualization tools now commonly present automated insights highlighting salient data patterns, including correlations, distributions, outliers, and differences, among others. While these insights are valuable for data exploration and chart interpretation, users currently only have a binary choice of accepting or rejecting them, lacking the flexibility to refine the system logic or customize the insight generation process. To address this limitation, we present GROOT, a prototype system that allows users to proactively specify and refine automated data insights. The system allows users to directly manipulate chart elements to receive insight recommendations based on their selections. Additionally, GROOT provides users with a manual editing interface to customize, reconfigure, or add new insights to individual charts and propagate them to future explorations. 
We describe a usage scenario to illustrate how these features collectively support insight editing and configuration, and discuss opportunities for future work, including incorporating LLMs, improving semantic data and visualization search, and supporting insight management.","authors":[{"affiliations":["University of Maryland, College Park, College Park, United States","Tableau Research, Seattle, United States"],"email":"sgathani@cs.umd.edu","is_corresponding":true,"name":"Sneha Gathani"},{"affiliations":["Tableau Research, Seattle, United States"],"email":"amcrisan@uwaterloo.ca","is_corresponding":false,"name":"Anamaria Crisan"},{"affiliations":["Tableau Research, Palo Alto, United States"],"email":"vsetlur@tableau.com","is_corresponding":false,"name":"Vidya Setlur"},{"affiliations":["Tableau Research, Seattle, United States"],"email":"arjun.srinivasan.10@gmail.com","is_corresponding":false,"name":"Arjun Srinivasan"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1097","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Groot: An Interface for Editing and Configuring Automated Data Insights","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1100","abstract":"Confidence scores of automatic speech recognition (ASR) outputs are often inadequately communicated, preventing their seamless integration into analytical workflows. In this paper, we introduce ConFides, a visual analytic system developed in collaboration with intelligence analysts to address this issue. ConFides aims to aid exploration and post-AI-transcription editing by visually representing the confidence associated with the transcription. We demonstrate how our tool can assist intelligence analysts who use ASR outputs in their analytical and exploratory tasks and how it can help mitigate misinterpretation of crucial information. We also discuss opportunities for improving textual data cleaning and model transparency for human-machine collaboration.","authors":[{"affiliations":["Washington University in St. Louis, St. Louis, United States"],"email":"sha@wustl.edu","is_corresponding":true,"name":"Sunwoo Ha"},{"affiliations":["Washington University in St. Louis, St. Louis, United States"],"email":"chaelim@wustl.edu","is_corresponding":false,"name":"Chaehun Lim"},{"affiliations":["Smith College, Northampton, United States"],"email":"jcrouser@smith.edu","is_corresponding":false,"name":"R. Jordan Crouser"},{"affiliations":["Washington University in St.
Louis, United States"],"email":"alvitta@wustl.edu","is_corresponding":false,"name":"Alvitta Ottley"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1100","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"ConFides: A Visual Analytics Solution for Automated Speech Recognition Analysis and Exploration","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1101","abstract":"Color coding, a technique assigning specific colors to different information types, has proven advantages in aiding human cognitive activities, especially reading and comprehension. The rise of Large Language Models (LLMs) has streamlined document coding, enabling simple automatic text labeling with various schemes. This has the potential to make color-coding more accessible and benefit more users. However, the importance of color choice, particularly in aiding textual information seeking through various color schemes, is not well studied. This paper presents a user study assessing the effectiveness of various color schemes generated by different base colors for readers' information-seeking performance in text documents color-coded by LLMs. Participants performed information-seeking tasks within scholarly papers' abstracts, each coded with a different scheme under time constraints. Results showed that non-analogous color schemes lead to better information-seeking performance, in both accuracy and response time. Yellow-inclusive color schemes lead to shorter response times and are also preferred by most participants. These could inform the better choice of color scheme for annotating text documents. 
As LLMs advance document coding, we advocate for more research focusing on the \"color\" aspect of color-coding techniques.","authors":[{"affiliations":["Pennsylvania State University, University Park, United States"],"email":"samnghoyin@gmail.com","is_corresponding":true,"name":"Ho Yin Ng"},{"affiliations":["Pennsylvania State University, University Park, United States"],"email":"zmh5268@psu.edu","is_corresponding":false,"name":"Zeyu He"},{"affiliations":["Pennsylvania State University, University Park , United States"],"email":"txh710@psu.edu","is_corresponding":false,"name":"Ting-Hao Kenneth Huang"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1101","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"What Color Scheme is More Effective in Assisting Readers to Locate Information in a Color-Coded Article?","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1109","abstract":"Homophily refers to the tendency of individuals to associate with others who are similar to them in characteristics, such as, race, ethnicity, age, gender, or interests. In this paper, we investigate if individuals exhibit racial homophily when viewing visualizations, using mass shooting data in the United States as the example topic. We conducted a crowdsourced experiment (N=450) where each participant was shown a visualization displaying the counts of mass shooting victims, highlighting the counts for one of three racial groups (White, Black, or Hispanic). Participants were assigned to view visualizations highlighting their own race or a different race to assess the influence of racial concordance on changes in affect (emotion) and attitude towards gun control. While we did not find evidence of homophily, the results showed a significant negative shift in affect across all visualization conditions. Notably, political ideology significantly impacted changes in affect, with more liberal views correlating with a more negative affect change. 
Our findings underscore the complexity of reactions to mass shooting visualizations and highlight the need for additional measures for understanding homophily in visualizations.","authors":[{"affiliations":["New York University, Brooklyn, United States"],"email":"pt2393@nyu.edu","is_corresponding":true,"name":"Poorna Talkad Sukumar"},{"affiliations":["New York University, Brooklyn, United States"],"email":"mporfiri@nyu.edu","is_corresponding":false,"name":"Maurizio Porfiri"},{"affiliations":["New York University, New York, United States"],"email":"onov@nyu.edu","is_corresponding":false,"name":"Oded Nov"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1109","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Connections Beyond Data: Exploring Homophily With Visualizations","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1114","abstract":"As visualization literacy and its implications gain prominence, we need effective methods to teach and prepare students for the variety of visualizations they might encounter in an increasingly data-driven world. Recently, the potential of comics has been recognized in various data visualization contexts, including educational settings. In this paper, we describe the development of a workshop in which we use our \u201ccomic construction kit\u201d as a tool for students to understand various data visualization techniques through an interactive creative approach of creating explanatory comics. We report on our insights and learnings from holding eight workshops with high school students, high school teachers, university students, and university lecturers, aiming to enhance the landscape of hands-on visualization activities that can enrich the visualization classroom. The comic construction kit and all supplemental materials are open source under a CC-BY license and available at https://fhstp.github.io/comixplain/vis4schools.html.","authors":[{"affiliations":["St. P\u00f6lten University of Applied Sciences, St. P\u00f6lten, Austria"],"email":"magdalena.boucher@fhstp.ac.at","is_corresponding":true,"name":"Magdalena Boucher"},{"affiliations":["St. Poelten University of Applied Sciences, St. Poelten, Austria"],"email":"christina.stoiber@fhstp.ac.at","is_corresponding":false,"name":"Christina Stoiber"},{"affiliations":["School of Informatics, Communications and Media, Hagenberg im M\u00fchlkreis, Austria"],"email":"mandy.keck@fh-hagenberg.at","is_corresponding":false,"name":"Mandy Keck"},{"affiliations":["St. Poelten University of Applied Sciences, St. Poelten, Austria"],"email":"victor.oliveira@fhstp.ac.at","is_corresponding":false,"name":"Victor Adriel de Jesus Oliveira"},{"affiliations":["St. Poelten University of Applied Sciences, St. 
Poelten, Austria"],"email":"wolfgang.aigner@fhstp.ac.at","is_corresponding":false,"name":"Wolfgang Aigner"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1114","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"The Comic Construction Kit: An Activity for Students to Learn and Explain Data Visualizations","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1116","abstract":"Visualizations support rapid analysis of scientific datasets, allowing viewers to glean aggregate information (e.g., the mean) within split-seconds. While prior research has explored this ability in conventional charts, it is unclear if spatial visualizations used by computational scientists afford a similar ensemble perception capacity. We investigate people's ability to estimate two summary statistics, mean and variance, from pseudocolor scalar fields. In a crowdsourced experiment, we find that participants can reliably characterize both statistics, although variance discrimination requires a much stronger signal. Multi-hue and diverging colormaps outperformed monochromatic, luminance ramps in aiding this extraction. Analysis of qualitative responses suggests that participants often estimate the distribution of hotspots and valleys as visual proxies for data statistics. These findings suggest that people's summary interpretation of spatial datasets is likely driven by the appearance of discrete color segments, rather than assessments of overall luminance. Implicit color segmentation in quantitative displays could thus prove more useful than previously assumed by facilitating quick, gist-level judgments about color-coded visualizations.","authors":[{"affiliations":["Argonne National Laboratory, Lemont, United States"],"email":"vmateevitsi@anl.gov","is_corresponding":false,"name":"Victor A. Mateevitsi"},{"affiliations":["Argonne National Laboratory, Lemont, United States","University of Illinois Chicago, Chicago, United States"],"email":"papka@anl.gov","is_corresponding":false,"name":"Michael E. 
Papka"},{"affiliations":["Indiana University, Indianapolis, United States"],"email":"redak@iu.edu","is_corresponding":true,"name":"Khairi Reda"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1116","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Science in a Blink: Supporting Ensemble Perception in Scalar Fields","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1117","abstract":"Geovisualizations are powerful tools for exploratory spatial analysis, enabling sighted users to discern patterns, trends, and relationships within geographic data. However, these visual tools have remained largely inaccessible to screen-reader users. We present AltGeoViz, a new system we designed to facilitate geovisualization exploration for these users. AltGeoViz dynamically generates alt-text descriptions based on the user's current map view, providing summaries of spatial patterns and descriptive statistics. In a study of five screen-reader users, we found that AltGeoViz enabled them to interact with geovisualizations in previously infeasible ways. Participants demonstrated a clear understanding of data summaries and their location context, and they could synthesize spatial understandings of their explorations. Moreover, we identified key areas for improvement, such as the addition of intuitive spatial navigation controls and comparative analysis features.","authors":[{"affiliations":["University of Washington, Seattle, United States"],"email":"chuchuli@cs.washington.edu","is_corresponding":true,"name":"Chu Li"},{"affiliations":["University of Washington, Seattle, United States"],"email":"ypang2@cs.washington.edu","is_corresponding":false,"name":"Rock Yuren Pang"},{"affiliations":["University of Washington, Seattle, United States"],"email":"asharif@cs.washington.edu","is_corresponding":false,"name":"Ather Sharif"},{"affiliations":["University of Washington, Seattle, United States"],"email":"chheda@cs.washington.edu","is_corresponding":false,"name":"Arnavi Chheda-Kothary"},{"affiliations":["University of Washington, Seattle, United States"],"email":"jheer@uw.edu","is_corresponding":false,"name":"Jeffrey Heer"},{"affiliations":["University of Washington, Seattle, United States"],"email":"jonf@cs.uw.edu","is_corresponding":false,"name":"Jon E. 
Froehlich"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1117","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"AltGeoViz: Facilitating Accessible Geovisualization","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1119","abstract":"Analyzing uncertainty in spatial data is a vital task in many domains, as for example with climate and weather simulation ensembles. Although there are many methods to support the analysis of the uncertainty, such as uncertain isocontours or calculation of statistical values, it is still a challenge to get an overview of the uncertainty and then decide a further method or parameter to analyze the data, or investigate further some region or point of interest. We present cumulative height fields, a visualization method for 2D scalar field ensembles using the marginal empirical distribution function and show preliminary results using volume rendering and slicing for the Max Planck Institute Grand Ensemble.","authors":[{"affiliations":["Institute of Computer Science, Leipzig University, Leipzig, Germany"],"email":"daetz@informatik.uni-leipzig.de","is_corresponding":true,"name":"Tomas Rodolfo Daetz Chacon"},{"affiliations":["German Climate Computing Center (DKRZ), Hamburg, Germany"],"email":"boettinger@dkrz.de","is_corresponding":false,"name":"Michael B\u00f6ttinger"},{"affiliations":["Leipzig University, Leipzig, Germany"],"email":"scheuermann@informatik.uni-leipzig.de","is_corresponding":false,"name":"Gerik Scheuermann"},{"affiliations":["Leipzig University, Leipzig, Germany"],"email":"heine@informatik.uni-leipzig.de","is_corresponding":false,"name":"Christian Heine"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1119","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Visualization of 2D Scalar Field Ensembles Using Volume Visualization of the Empirical Distribution Function","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1121","abstract":"Many real-world networks contain structurally-equivalent nodes. These are defined as vertices that share the same set of neighboring nodes, making them interchangeable with a traditional graph layout approach. 
However, many real-world graphs also have properties associated with nodes, adding additional meaning to them. We present an approach for swapping locations of structurally-equivalent nodes in graph layout so that those with more similar properties have closer proximity to each other. This improves the usefulness of the visualization from an attribute perspective without negatively impacting the visualization from a structural perspective. We include an algorithm for finding these sets of nodes in linear time, as well as methodologies for ordering nodes based on their attribute similarity, which works for scalar, ordinal, multidimensional, and categorical data.","authors":[{"affiliations":["Pacific Northwest National Lab, Richland, United States"],"email":"patrick.mackey@pnnl.gov","is_corresponding":true,"name":"Patrick Mackey"},{"affiliations":["University of Arizona, Tucson, United States","Pacific Northwest National Laboratory, Richland, United States"],"email":"jacobmiller1@arizona.edu","is_corresponding":false,"name":"Jacob Miller"},{"affiliations":["Pacific Northwest National Laboratory, Richland, United States"],"email":"liz.f@pnnl.gov","is_corresponding":false,"name":"Liz Faultersack"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1121","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Improving Property Graph Layouts by Leveraging Attribute Similarity for Structurally Equivalent Nodes","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1126","abstract":"Psychological research often involves understanding psychological constructs through conducting factor analysis on data collected by a questionnaire, which can comprise hundreds of questions. Without interactive systems for interpreting factor models, researchers are frequently exposed to subjectivity, potentially leading to misinterpretations or overlooked crucial information. This paper introduces FAVis, a novel interactive visualization tool designed to aid researchers in interpreting and evaluating factor analysis results. FAVis enhances the understanding of relationships between variables and factors by supporting multiple views for visualizing factor loadings and correlations, allowing users to analyze information from various perspectives. The primary feature of FAVis is to enable users to set optimal thresholds for factor loadings to balance clarity and information retention. FAVis also allows users to assign tags to variables, enhancing the understanding of factors by linking them to their associated psychological constructs. We conduct a case study on a dataset from the Motivational State Questionnaire, utilizing a three-factor common factor model. 
Our user study demonstrates the utility of FAVis in various tasks.","authors":[{"affiliations":["University of Notre Dame, Notre Dame, United States","University of Notre Dame, Notre Dame, United States"],"email":"ylu22@nd.edu","is_corresponding":true,"name":"Yikai Lu"},{"affiliations":["University of Notre Dame, Notre Dame, United States"],"email":"chaoli.wang@nd.edu","is_corresponding":false,"name":"Chaoli Wang"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1126","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"FAVis: Visual Analytics of Factor Analysis for Psychological Research","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1127","abstract":"In this paper, we analyze the Apple Vision Pro hardware and the visionOS software platform, assessing their capabilities for volume rendering of structured grids, a prevalent technique across various applications. The Apple Vision Pro supports multiple display modes, from classical augmented reality (AR) using video see-through technology to immersive virtual reality (VR) environments that exclusively render virtual objects. These modes utilize different APIs and exhibit distinct capabilities. Our focus is on direct volume rendering, selected for its implementation challenges due to the native graphics APIs being predominantly oriented towards surface shading. Volume rendering is particularly vital in fields where AR and VR visualizations offer substantial benefits, such as in medicine and manufacturing. Despite its initial high cost, we anticipate that the Vision Pro will become more accessible and affordable over time, following Apple's track record of market expansion. 
As these devices become more prevalent, understanding how to effectively program and utilize them becomes increasingly important, offering significant opportunities for innovation and practical applications in various sectors.","authors":[{"affiliations":["University of Duisburg-Essen, Duisburg, Germany"],"email":"camilla.hrycak@uni-due.de","is_corresponding":true,"name":"Camilla Hrycak"},{"affiliations":["University of Duisburg-Essen, Duisburg, Germany"],"email":"david.lewakis@stud.uni-due.de","is_corresponding":false,"name":"David Lewakis"},{"affiliations":["University of Duisburg-Essen, Duisburg, Germany"],"email":"jens.krueger@uni-due.de","is_corresponding":false,"name":"Jens Harald Krueger"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1127","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Investigating the Apple Vision Pro Spatial Computing Platform for GPU-Based Volume Visualization","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1130","abstract":"Visualization, from simple line plots to complex high-dimensional visual analysis systems, has established itself throughout numerous domains to explore, analyze, and evaluate data. Applying such visualizations in the context of simulation science, where High-Performance Computing (HPC) produces ever-growing amounts of data that is more complex, potentially multidimensional, and multi-modal, requires resources and a high level of technological experience often not available to domain experts. In this work, we present DaVE - a curated database of visualization examples, which aims to provide state-of-the-art and advanced visualization methods that arise in the context of HPC applications. Based on domain- or data-specific descriptors entered by the user, DaVE provides a list of appropriate visualization techniques, each accompanied by descriptions, examples, references, and resources. Sample code, adaptable container templates, and recipes for easy integration in HPC applications can be downloaded for easy access to high-fidelity visualizations.
While the database is currently filled with a limited number of entries based on a broad evaluation of needs and challenges of current HPC users, DaVE is designed to be easily extended by experts from both the visualization and HPC communities.","authors":[{"affiliations":["RWTH Aachen University, Aachen, Germany"],"email":"koenen@informatik.rwth-aachen.de","is_corresponding":true,"name":"Jens Koenen"},{"affiliations":["RPTU Kaiserslautern-Landau, Kaiserslautern, Germany"],"email":"m.petersen@rptu.de","is_corresponding":false,"name":"Marvin Petersen"},{"affiliations":["RPTU Kaiserslautern-Landau, Kaiserslautern, Germany"],"email":"garth@rptu.de","is_corresponding":false,"name":"Christoph Garth"},{"affiliations":["RWTH Aachen University, Aachen, Germany"],"email":"gerrits@vis.rwth-aachen.de","is_corresponding":false,"name":"Tim Gerrits"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1130","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"DaVE - A Curated Database of Visualization Examples","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1135","abstract":"Humans struggle to perceive and interpret high-dimensional data. Therefore, high-dimensional data are often projected into two dimensions for visualization. Many applications benefit from complex nonlinear dimensionality reduction techniques, but the effects of individual high-dimensional features are hard to explain in the two-dimensional space. Most visualization solutions use multiple two-dimensional plots, each showing the effect of one high-dimensional feature in two dimensions; this approach creates a need for a visual inspection of k plots for a k-dimensional input space. Our solution, Feature Clock, provides a novel approach that eliminates the need to inspect these k plots to grasp the influence of original features on the data structure depicted in two dimensions. 
Feature Clock enhances the explainability and compactness of visualizations of embedded data and is available in an open-source Python library.","authors":[{"affiliations":["ETH Z\u00fcrich, Z\u00fcrich, Switzerland"],"email":"ovcharenko.folga@gmail.com","is_corresponding":true,"name":"Olga Ovcharenko"},{"affiliations":["ETH Z\u00fcrich, Z\u00fcrich, Switzerland"],"email":"rita.sevastjanova@uni-konstanz.de","is_corresponding":false,"name":"Rita Sevastjanova"},{"affiliations":["ETH Zurich, Z\u00fcrich, Switzerland"],"email":"valentina.boeva@inf.ethz.ch","is_corresponding":false,"name":"Valentina Boeva"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1135","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Feature Clock: High-Dimensional Effects in Two-Dimensional Plots","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1144","abstract":"Reconstruction of 3D scenes from 2D images is a technical challenge that impacts domains from Earth and planetary sciences and space exploration to augmented and virtual reality. Typically, reconstruction algorithms first identify common features across images and then minimize reconstruction errors after estimating the shape of the terrain. This bundle adjustment (BA) step optimizes around a single, simplifying scalar value that obfuscates many possible causes of reconstruction errors (e.g., initial estimate of the position and orientation of the camera, lighting conditions, ease of feature detection in the terrain). Reconstruction errors can lead to inaccurate scientific inferences or endanger a spacecraft exploring a remote environment. To address this challenge, we present VECTOR, a visual analysis tool that improves error inspection for stereo reconstruction BA. VECTOR provides analysts with previously unavailable visibility into feature locations, camera pose, and computed 3D points. VECTOR was developed in partnership with the Perseverance Mars Rover and Ingenuity Mars Helicopter terrain reconstruction team at the NASA Jet Propulsion Laboratory. 
We report on how this tool was used to debug and improve terrain reconstruction for the Mars 2020 mission.","authors":[{"affiliations":["Northeastern University, Boston, United States"],"email":"racquel.fygenson@gmail.com","is_corresponding":false,"name":"Racquel Fygenson"},{"affiliations":["Weta FX, Auckland, New Zealand"],"email":"kjawad@andrew.cmu.edu","is_corresponding":false,"name":"Kazi Jawad"},{"affiliations":["Art Center, Pasadena, United States"],"email":"zongzhanisabelli@gmail.com","is_corresponding":false,"name":"Zongzhan Li"},{"affiliations":["California Institute of Technology, Pasadena, United States"],"email":"francois.ayoub@jpl.nasa.gov","is_corresponding":false,"name":"Francois Ayoub"},{"affiliations":["California Institute of Technology, Pasadena, United States"],"email":"bob.deen@jpl.nasa.gov","is_corresponding":false,"name":"Robert G Deen"},{"affiliations":["California Institute of Technology, Pasadena, United States"],"email":"sd@scottdavidoff.com","is_corresponding":false,"name":"Scott Davidoff"},{"affiliations":["Carnegie Mellon University, Pittsburgh, United States"],"email":"domoritz@cmu.edu","is_corresponding":false,"name":"Dominik Moritz"},{"affiliations":["NASA-JPL, Pasadena, United States"],"email":"mauricio.a.hess.flores@jpl.nasa.gov","is_corresponding":true,"name":"Mauricio Hess-Flores"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1144","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Opening the black box of 3D reconstruction error analysis with VECTOR","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1146","abstract":"Millions of runners rely on smart watches that display running-related metrics such as pace, heart rate and distance for training and racing -- mostly with text and numbers. Although research tells us that visualizations are a good alternative to text on smart watches, we know little about how visualizations can help in realistic running scenarios. We conducted a study in which 20 runners completed running-related tasks on an outdoor track using both text and visualizations. 
Our results show that runners are 1.5 to 8 times faster in completing those tasks with visualizations than with text, prefer visualizations to text, and would use such visualizations while running -- were they available on their smart watch.","authors":[{"affiliations":["University of Victoria, Victoria, Canada"],"email":"sarinaksj@uvic.ca","is_corresponding":false,"name":"Sarina Kashanj"},{"affiliations":["University of Victoria, Victoria, Canada","Delft University of Technology, Delft, Netherlands"],"email":"xiyao.wang23@gmail.com","is_corresponding":false,"name":"Xiyao Wang"},{"affiliations":["University of Victoria, Victoria, Canada"],"email":"cperin@uvic.ca","is_corresponding":true,"name":"Charles Perin"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1146","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Visualizations on Smart Watches while Running: It Actually Helps!","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1150","abstract":"Exploratory visual data analysis tools empower data analysts to efficiently and intuitively explore data insights throughout the entire analysis cycle. However, the gap between common programmatic analysis (e.g., within computational notebooks) and exploratory visual analysis leads to a disjointed and inefficient data analysis experience. To bridge this gap, we developed PyGWalker, a Python library that offers on-the-fly assistance for exploratory visual data analysis. It features a lightweight and intuitive GUI with a shelf builder modality. Its loosely coupled architecture supports multiple computational environments to accommodate varying data sizes. Since its release in February 2023, PyGWalker has gained much attention, with 468k downloads on PyPI and over 9.8k stars on GitHub as of April 2024.
This demonstrates its value to the data science and visualization community, with researchers and developers integrating it into their own applications and studies.","authors":[{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China","Kanaries Data Inc., Hangzhou, China"],"email":"yue.yu@connect.ust.hk","is_corresponding":true,"name":"Yue Yu"},{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"lshenaj@connect.ust.hk","is_corresponding":false,"name":"Leixian Shen"},{"affiliations":["Kanaries Data Inc., Hangzhou, China"],"email":"feilong@kanaries.net","is_corresponding":false,"name":"Fei Long"},{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"huamin@cse.ust.hk","is_corresponding":false,"name":"Huamin Qu"},{"affiliations":["Kanaries Data Inc., Hangzhou, China"],"email":"haochen@kanaries.net","is_corresponding":false,"name":"Hao Chen"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1150","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"PyGWalker: On-the-fly Assistant for Exploratory Visual Data Analysis","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1155","abstract":"Augmented reality (AR) area labels can highlight real-life objects, visualize real world regions with arbitrary boundaries, and show invisible objects or features. Environment conditions such as lighting and clutter can decrease fixed or passive label visibility, and labels that have high opacity levels can occlude crucial details in the environment. We design and evaluate active AR area label visualization modes to enhance visibility across real-life environments, while still retaining environment details within the label. For this, we define a distant characteristic color from the environment in perceptual CIELAB space, then introduce spatial variations among label pixel colors based on the underlying environment variation. 
In a user study with 18 participants, we discovered that our active label visualization modes can be comparable in visibility to a fixed green baseline by Gabbard et al., and can outperform it with added spatial variation in cluttered environments, across varying levels of lighting (e.g., nighttime), and in environments with colors similar to the fixed baseline color.","authors":[{"affiliations":["Brown University, Providence, United States"],"email":"hojung_kwon@brown.edu","is_corresponding":false,"name":"Hojung Kwon"},{"affiliations":["Brown University, Providence, United States"],"email":"yuanbo_li@brown.edu","is_corresponding":false,"name":"Yuanbo Li"},{"affiliations":["Brown University, Providence, United States"],"email":"chloe_ye2019@hotmail.com","is_corresponding":false,"name":"Xiaohan Ye"},{"affiliations":["Brown University, Providence, United States"],"email":"praccho_muna-mcquay@brown.edu","is_corresponding":false,"name":"Praccho Muna-McQuay"},{"affiliations":["Duke University, Durham, United States"],"email":"liuren.yin@duke.edu","is_corresponding":false,"name":"Liuren Yin"},{"affiliations":["Brown University, Providence, United States"],"email":"james_tompkin@brown.edu","is_corresponding":true,"name":"James Tompkin"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1155","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Active Appearance and Spatial Variation Can Improve Visibility in Area Labels for Augmented Reality","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1156","abstract":"Compound graphs are networks in which vertices can be grouped into larger subsets, with these subsets capable of further grouping, resulting in a nesting that can be many levels deep. Such graphs arise in several applications including biological workflows, chemical equations, and computational data flow analysis. Common layouts prioritize the lowest level of the grouping, down to the individual ungrouped vertices, which can make the higher level grouped structures more difficult to discern, especially in deeply nested networks. We contribute an overview+detail layout that preserves the saliency of the higher level network structure when groups are expanded to show internal nested structure. Our layout draws inner structures adjacent to their parents, using a modified tree layout to place substructures. We describe our algorithm and then present case studies demonstrating the layout's utility to a domain expert working on data flow analysis. 
Finally, we discuss network parameters and analysis situations in which our layout is well suited.","authors":[{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"hatch.on27@gmail.com","is_corresponding":true,"name":"Chang Han"},{"affiliations":["University of Arizona, Tucson, United States"],"email":"lieffers@arizona.edu","is_corresponding":false,"name":"Justin Lieffers"},{"affiliations":["University of Arizona, Tucson, United States"],"email":"claytonm@arizona.edu","is_corresponding":false,"name":"Clayton Morrison"},{"affiliations":["The University of Utah, Salt Lake City, United States"],"email":"kisaacs@sci.utah.edu","is_corresponding":false,"name":"Katherine E. Isaacs"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1156","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"An Overview + Detail Layout for Visualizing Compound Graphs","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1159","abstract":"With two studies, we assess how different walking trajectories (straight line, circular, and infinity) and speeds (2 km/h, 4 km/h, and 6 km/h) influence the accuracy and response time of participants reading micro visualizations on a smartwatch. We showed our participants common watch face micro visualizations including date, time, weather information, and four complications showing progress charts of fitness data. 
Our findings suggest that while walking trajectories did not significantly affect reading performance, overall walking activity, especially at high speeds, hurt reading accuracy and, to some extent, response time.","authors":[{"affiliations":["University of Stuttgart, Stuttgart, Germany"],"email":"fairouz.grioui@vis.uni-stuttgart.de","is_corresponding":true,"name":"Fairouz Grioui"},{"affiliations":["University of Stuttgart, Stuttgart, Germany"],"email":"research@blascheck.eu","is_corresponding":false,"name":"Tanja Blascheck"},{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Orsay, France","Inria, Saclay, France"],"email":"yaolijie0219@gmail.com","is_corresponding":false,"name":"Lijie Yao"},{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Orsay, France","Inria, Saclay, France"],"email":"petra.isenberg@inria.fr","is_corresponding":false,"name":"Petra Isenberg"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1159","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Micro Visualizations on a Smartwatch: Assessing Reading Performance While Walking","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1161","abstract":"Digital twins are an excellent tool to model, visualize, and simulate complex systems, to understand and optimize their operation. In this work, we present the technical challenges of real-time visualization of a digital twin of the Frontier supercomputer. We show the initial prototype and current state of the twin and highlight technical design challenges of visualizing such a large High Performance Computing (HPC) system. The goal is to understand the use of augmented reality as a primary way to extract information and collaborate on digital twins of complex systems. This leverages the spatio-temporal aspect of a 3D representation of a digital twin, with the ability to view historical and real-time telemetry, triggering simulations of a system state and viewing the results, which can be augmented via dashboards for details. 
Finally, we discuss considerations and opportunities for augmented reality of digital twins of large-scale, parallel computers.","authors":[{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"maiterthm@ornl.gov","is_corresponding":true,"name":"Matthias Maiterth"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"brewerwh@ornl.gov","is_corresponding":false,"name":"Wes Brewer"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"dewetd@ornl.gov","is_corresponding":false,"name":"Dane De Wet"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"greenwoodms@ornl.gov","is_corresponding":false,"name":"Scott Greenwood"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"kumarv@ornl.gov","is_corresponding":false,"name":"Vineet Kumar"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"hinesjr@ornl.gov","is_corresponding":false,"name":"Jesse Hines"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"bouknightsl@ornl.gov","is_corresponding":false,"name":"Sedrick L Bouknight"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"wangz@ornl.gov","is_corresponding":false,"name":"Zhe Wang"},{"affiliations":["Hewlett Packard Enterprise, Berkshire, United Kingdom"],"email":"tim.dykes@hpe.com","is_corresponding":false,"name":"Tim Dykes"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"fwang2@ornl.gov","is_corresponding":false,"name":"Feiyi Wang"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1161","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Visualizing an Exascale Data Center Digital Twin: Considerations, Challenges and Opportunities","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1163","abstract":"Integral curves have been widely used to represent and analyze various vector fields. Curve-based clustering and pattern search approaches are usually applied to aid the identification of meaningful patterns from large numbers of integral curves. However, they need not support an interactive, level-of-detail exploration of these patterns. To address this, we propose a Curve Segment Neighborhood Graph (CSNG) to capture the relationships between neighboring curve segments. This graph representation enables us to adapt the fast community detection algorithm, i.e., the Louvain algorithm, to identify individual graph communities from CSNG. Our results show that these communities often correspond to the features of the flow. To achieve a multi-level interactive exploration of the detected communities, we adapt a force-directed layout that allows users to refine and re-group communities based on their domain knowledge. 
We incorporate the proposed techniques into an interactive system to enable effective analysis and interpretation of complex patterns in large-scale integral curve datasets.","authors":[{"affiliations":["University of Houston, Houston, United States"],"email":"nguyenpkk95@gmail.com","is_corresponding":true,"name":"Nguyen K Phan"},{"affiliations":["University of Houston, Houston, United States"],"email":"chengu@cs.uh.edu","is_corresponding":false,"name":"Guoning Chen"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1163","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Curve Segment Neighborhood-based Vector Field Exploration","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1166","abstract":"Custom animated visualizations of large, complex datasets are helpful across many domains, but they are hard to develop. Much of the difficulty arises from maintaining visualization state across a large set of animated graphical elements that may change in number over time. We contribute Counterpoint, a framework for state management designed to help implement such visualizations in JavaScript. Using Counterpoint, developers can manipulate large collections of marks with reactive attributes that are easy to render in scalable APIs such as Canvas and WebGL. Counterpoint also helps orchestrate the entry and exit of graphical elements using the concept of a rendering \"stage.\" Through a performance evaluation, we show that Counterpoint adds minimal overhead over current high-performance rendering techniques while simplifying implementation. We also provide two examples of visualizations created using Counterpoint that illustrate its flexibility and compatibility with other visualization toolkits as well as considerations for users with disabilities. 
Counterpoint is open-source and available at https://github.com/cmudig/counterpoint.","authors":[{"affiliations":["Carnegie Mellon University, Pittsburgh, United States"],"email":"vsivaram@andrew.cmu.edu","is_corresponding":true,"name":"Venkatesh Sivaraman"},{"affiliations":["Carnegie Mellon University, Pittsburgh, United States"],"email":"fje@cmu.edu","is_corresponding":false,"name":"Frank Elavsky"},{"affiliations":["Carnegie Mellon University, Pittsburgh, United States"],"email":"domoritz@cmu.edu","is_corresponding":false,"name":"Dominik Moritz"},{"affiliations":["Carnegie Mellon University, Pittsburgh, United States"],"email":"adamperer@cmu.edu","is_corresponding":false,"name":"Adam Perer"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1166","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Counterpoint: Orchestrating Large-Scale Custom Animated Visualizations","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1173","abstract":"Visualizing citation relations with network structures is widely used, but the visual complexity can make it challenging for individual researchers to navigate through them. We collected data from 18 researchers using an interface that we designed using network simplification methods and analyzed how users browsed and identified important papers. Our analysis reveals six major patterns used for identifying papers of interest, which can be categorized into three key components: Fields, Bridges, and Foundations, each viewed from two distinct perspectives: layout-oriented and connection-oriented. The connection-oriented approach was found to be more effective for selecting relevant papers, but the layout-oriented method was adopted more often, even though it led to unexpected results and user frustration. Our findings emphasize the importance of integrating these components and the necessity to balance visual layouts with meaningful connections to enhance the effectiveness of citation networks in academic browsing systems.","authors":[{"affiliations":["Seoul National University, Seoul, Korea, Republic of"],"email":"krchoe@hcil.snu.ac.kr","is_corresponding":true,"name":"Kiroong Choe"},{"affiliations":["Seoul National University, Seoul, Korea, Republic of"],"email":"gracekim027@snu.ac.kr","is_corresponding":false,"name":"Eunhye Kim"},{"affiliations":["Dept. 
of Electrical and Computer Engineering, SNU, Seoul, Korea, Republic of"],"email":"paulmoguri@snu.ac.kr","is_corresponding":false,"name":"Sangwon Park"},{"affiliations":["Seoul National University, Seoul, Korea, Republic of"],"email":"jseo@snu.ac.kr","is_corresponding":false,"name":"Jinwook Seo"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1173","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Fields, Bridges, and Foundations: How Researchers Browse Citation Network Visualizations","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1177","abstract":"The proliferation of misleading visualizations online, particularly during critical events like public health crises and elections, poses a significant risk of misinformation. This work investigates the capability of GPT-4V to detect misleading visualizations. Utilizing a dataset of tweet-visualization pairs with various visual misleaders, we tested GPT-4V under four experimental conditions: naive zero-shot, naive few-shot, guided zero-shot, and guided few-shot. Our results demonstrate that GPT-4V can detect misleading visualizations with moderate accuracy without prior training (naive zero-shot) and that performance considerably improves by providing the model with the definitions of misleaders (guided zero-shot). However, combining definitions with examples of misleaders (guided few-shot) did not yield further improvements. 
This study underscores the feasibility of using large vision-language models such as GPT-4V to combat misinformation and emphasizes the importance of optimizing prompt engineering to enhance detection accuracy.","authors":[{"affiliations":["University of Massachusetts Amherst, Amherst, United States"],"email":"jhalexander@umass.edu","is_corresponding":false,"name":"Jason Huang Alexander"},{"affiliations":["University of Massachusetts Amherst, Amherst, United States"],"email":"phnanda@umass.edu","is_corresponding":false,"name":"Priyal H Nanda"},{"affiliations":["Northeastern University, Boston, United States"],"email":"yangkc@iu.edu","is_corresponding":false,"name":"Kai-Cheng Yang"},{"affiliations":["University of Massachusetts Amherst, Amherst, United States"],"email":"asarv@cs.umass.edu","is_corresponding":true,"name":"Ali Sarvghad"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1177","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Can GPT-4V Detect Misleading Visualizations?","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1183","abstract":"An atmospheric front is an imaginary surface that separates two distinct air masses and is commonly defined as the warm-air side of a frontal zone with high gradients of atmospheric temperature and humidity. These fronts are a widely used conceptual model in meteorology and are often encountered in the literature as two-dimensional (2D) front lines on surface analysis charts. This paper presents a method for computing three-dimensional (3D) atmospheric fronts as surfaces that is capable of extracting continuous and well-confined features suitable for 3D visual analysis, spatio-temporal tracking, and statistical analyses. Recently developed contour-based methods for 3D front extraction rely on computing the third derivative of a moist potential temperature field. Additionally, they require the field to be smoothed to obtain continuous large-scale structures. This paper demonstrates the feasibility of an alternative method for front extraction using ridge surface computation. The proposed method requires only the second derivative of the input field and produces accurate structures even from unsmoothed data.
An application of the ridge-based method to a data set corresponding to Cyclone Friederike demonstrates its benefits and utility towards visual analysis of the full 3D structure of fronts.","authors":[{"affiliations":["Zuse Institute Berlin, Berlin, Germany"],"email":"anne.gossing@fu-berlin.de","is_corresponding":true,"name":"Anne Gossing"},{"affiliations":["Universit\u00e4t Hamburg, Hamburg, Germany"],"email":"andreas.beckert@uni-hamburg.de","is_corresponding":false,"name":"Andreas Beckert"},{"affiliations":["Universit\u00e4t Hamburg, Hamburg, Germany"],"email":"christoph.fischer-1@uni-hamburg.de","is_corresponding":false,"name":"Christoph Fischer"},{"affiliations":["Zuse Institute Berlin, Berlin, Germany"],"email":"klenert@zib.de","is_corresponding":false,"name":"Nicolas Klenert"},{"affiliations":["Indian Institute of Science, Bangalore, India"],"email":"vijayn@iisc.ac.in","is_corresponding":false,"name":"Vijay Natarajan"},{"affiliations":["Freie Universit\u00e4t Berlin, Berlin, Germany"],"email":"george.pacey@fu-berlin.de","is_corresponding":false,"name":"George Pacey"},{"affiliations":["Universit\u00e4t Hamburg, Hamburg, Germany"],"email":"thorwin.vogt@uni-hamburg.de","is_corresponding":false,"name":"Thorwin Vogt"},{"affiliations":["Universit\u00e4t Hamburg, Hamburg, Germany"],"email":"marc.rautenhaus@uni-hamburg.de","is_corresponding":false,"name":"Marc Rautenhaus"},{"affiliations":["Zuse Institute Berlin, Berlin, Germany"],"email":"baum@zib.de","is_corresponding":false,"name":"Daniel Baum"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1183","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"A Ridge-based Approach for Extraction and Visualization of 3D Atmospheric Fronts","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1184","abstract":"To improve the perception of hierarchical structures in data sets, several color map generation algorithms have been proposed to take this structure into account. But the design of hierarchical color maps elicits different requirements to those of color maps for tabular data. Within this paper, we make an initial effort to put design rules from the color map literature into the context of hierarchical color maps. We investigate the impact of several design decisions and provide recommendations for various analysis scenarios. 
Thus, we lay the foundation for objective quality criteria to evaluate hierarchical color maps.","authors":[{"affiliations":["Fraunhofer IGD, Darmstadt, Germany"],"email":"tobias.mertz@igd.fraunhofer.de","is_corresponding":true,"name":"Tobias Mertz"},{"affiliations":["Fraunhofer IGD, Darmstadt, Germany","TU Darmstadt, Darmstadt, Germany"],"email":"joern.kohlhammer@igd.fraunhofer.de","is_corresponding":false,"name":"J\u00f6rn Kohlhammer"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1184","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Towards a Quality Approach to Hierarchical Color Maps","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1185","abstract":"The visualization and interactive exploration of geo-referenced networks pose challenges if the network's nodes are not evenly distributed. Our approach proposes new ways of realizing animated transitions for exploring such networks from an ego-perspective. We aim to reduce the required screen real estate while maintaining the viewers' mental map of distances and directions. A preliminary study provides first insights into the comprehensibility of animated geographic transitions regarding directional relationships between start and end point in different projections.
Two use cases showcase how ego-perspective graph exploration can be supported using less screen space than previous approaches.","authors":[{"affiliations":["University of Stuttgart, Stuttgart, Germany"],"email":"max@mumintroll.org","is_corresponding":true,"name":"Max Franke"},{"affiliations":["University of Stuttgart, Stuttgart, Germany"],"email":"samuel.beck@vis.uni-stuttgart.de","is_corresponding":false,"name":"Samuel Beck"},{"affiliations":["University of Stuttgart, Stuttgart, Germany"],"email":"steffen.koch@vis.uni-stuttgart.de","is_corresponding":false,"name":"Steffen Koch"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1185","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Two-point Equidistant Projection and Degree-of-interest Filtering for Smooth Exploration of Geo-referenced Networks","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1186","abstract":"Data visualizations help extract insights from datasets, but reaching these insights requires decomposing high level goals into low-level analytic tasks that can be complex due to varying degrees of data literacy and visualization experience. Recent advancements in large language models (LLMs) have shown promise for lowering barriers for users to achieve tasks such as writing code and may likewise facilitate visualization insight. Scalable Vector Graphics (SVG), a text-based image format common in data visualizations, matches well with the text sequence processing of transformer-based LLMs. In this paper, we explore the capability of LLMs to perform 10 low-level visual analytic tasks defined by Amar, Eagan, and Stasko directly on SVG-based visualizations. Using zero-shot prompts, we instruct the models to provide responses or modify the SVG code based on given visualizations. Our findings demonstrate that LLMs can effectively modify existing SVG visualizations for some tasks like Cluster but perform poorly on tasks requiring mathematical operations like Compute Derived Value. We also discovered that LLM performance can vary based on factors such as the number of data points, the presence of value labels, and the chart type. 
Our findings contribute to gauging the general capabilities of LLMs and highlight the need for further exploration and development to fully harness their potential in supporting visual analytic tasks.","authors":[{"affiliations":["Brown University, Providence, United States"],"email":"leooooxzz@gmail.com","is_corresponding":true,"name":"Zhongzheng Xu"},{"affiliations":["Emory University, Atlanta, United States"],"email":"emily.wall@emory.edu","is_corresponding":false,"name":"Emily Wall"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1186","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Exploring the Capability of LLMs in Performing Low-Level Visual Analytic Tasks on SVG Data Visualizations","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1188","abstract":"Vortices and their analysis play a critical role in the understanding of complex phenomena in turbulent flow. Traditional vortex extraction methods, notably region-based techniques, often overlook the entanglement phenomenon, resulting in the inclusion of multiple vortices within a single extracted region. Their separation is necessary for quantifying different types of vortices and their statistics. In this study, we propose a novel vortex separation method that extends the conventional contour tree-based segmentation approach with an additional step termed \u201clayering\u201d. Upon extracting a vortical region using specified vortex criteria (e.g., \u03bb2), we initially establish topological segmentation based on the contour tree, followed by the layering process to allocate appropriate segmentation IDs to unsegmented cells, thus separating individual vortices within the region. However, these regions may still suffer from inaccurate splits, which we address statistically by leveraging the continuity of vorticity lines across the split boundaries. 
Our findings demonstrate a significant improvement in both the separation of vortices and the mitigation of inaccurate splits compared to prior methods.","authors":[{"affiliations":["University of Houston, Houston, United States"],"email":"adeelz92@gmail.com","is_corresponding":true,"name":"Adeel Zafar"},{"affiliations":["University of Houston, Houston, United States"],"email":"zpoorsha@cougarnet.uh.edu","is_corresponding":false,"name":"Zahra Poorshayegh"},{"affiliations":["University of Houston, Houston, United States"],"email":"diyang@uh.edu","is_corresponding":false,"name":"Di Yang"},{"affiliations":["University of Houston, Houston, United States"],"email":"chengu@cs.uh.edu","is_corresponding":false,"name":"Guoning Chen"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1188","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Topological Separation of Vortices","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1189","abstract":"The information visualization research community commonly produces supporting software to demonstrate technical contributions to the field. However, developing this software tends to be an overwhelming task, and the final product tends to be a research prototype without much thought for modularization and re-usability, which makes it harder to replicate and adopt. This paper presents a design pattern for facilitating the creation, dissemination, and re-utilization of visualization techniques using reactive widgets. The design pattern features basic concepts that leverage modern front-end development best practices and standards, which ease development and replication.
The paper presents several usage examples of the pattern, templates for implementation, and even a wrapper for facilitating the conversion of any Vega specification into a reactive widget.","authors":[{"affiliations":["Northeastern University, San Francisco, United States"],"email":"john.guerra@gmail.com","is_corresponding":true,"name":"John Alexis Guerra-Gomez"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1189","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Towards Reusable and Reactive Widgets for Information Visualization Research and Dissemination","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1191","abstract":"To enable data-driven decision-making across organizations, data professionals need to share insights with their colleagues in context-appropriate communication channels. Many of their colleagues rely on data but are not themselves analysts; furthermore, their colleagues are reluctant or unable to use dedicated analytical applications or dashboards, and they expect communication to take place within threaded collaboration platforms such as Slack or Microsoft Teams. In this paper, we introduce a set of six strategies for adapting content from business intelligence (BI) dashboards into appropriate formats for sharing on collaboration platforms, formats that we refer to as dashboard snapshots. Informed by prior studies of enterprise communication around data, these strategies go beyond redesigning or restyling by considering varying levels of data literacy across an organization, introducing affordances for self-service question-answering, and anticipating the post-sharing lifecycle of data artifacts. These strategies involve the use of templates that are matched to common communicative intents, serving to reduce the workload of data professionals. 
We contribute a formal representation of these strategies and demonstrate their applicability in a comprehensive enterprise communication scenario featuring multiple stakeholders that unfolds over the span of months.","authors":[{"affiliations":["Northwestern University, Evanston, United States"],"email":"hyeokkim2024@u.northwestern.edu","is_corresponding":true,"name":"Hyeok Kim"},{"affiliations":["Tableau Research, Seattle, United States"],"email":"arjun.srinivasan.10@gmail.com","is_corresponding":false,"name":"Arjun Srinivasan"},{"affiliations":["Tableau Research, Seattle, United States"],"email":"mbrehmer@uwaterloo.ca","is_corresponding":false,"name":"Matthew Brehmer"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1191","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Bringing Data into the Conversation: Adapting Content from Business Intelligence Dashboards for Threaded Collaboration Platforms","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1192","abstract":"Narrative visualization has become a crucial tool in data presentation, merging storytelling with data visualization to convey complex information in an engaging and accessible manner. In this study, we review the design space for narrative visualizations, focusing on animation style, through a comprehensive analysis of 71 papers from key visualization venues. We categorize these papers into six broad themes: Animation Style, Interactivity, Technology Usage, Methodology Development, Evaluation Type, and Application Domain. Our findings reveal a significant evolution in the field, marked by a growing preference for animated and non-interactive techniques. This trend reflects a shift towards minimizing user interaction while enhancing the clarity and impact of data presentation. We also identified key trends and technologies that have shaped the field, highlighting the role of technologies, such as machine learning in driving these changes. We offer insights into the dynamic interrelations within the narrative visualization domain, suggesting a future research trajectory that balances interactivity with automated tools to foster increased engagement. 
Our work lays the groundwork for future approaches for effective and innovative narrative visualization in diverse applications.","authors":[{"affiliations":["Louisiana State University, Baton Rouge, United States"],"email":"jyang44@lsu.edu","is_corresponding":true,"name":"Vyri Junhan Yang"},{"affiliations":["Louisiana State University, Baton Rouge, United States"],"email":"mjasim@lsu.edu","is_corresponding":false,"name":"Mahmood Jasim"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1192","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Animating the Narrative: A Review of Animation Styles in Narrative Visualization","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1193","abstract":"We present LinkQ, a system that leverages a large language model (LLM) to facilitate knowledge graph (KG) query construction through natural language question-answering. Traditional approaches often require detailed knowledge of complex graph querying languages, limiting the ability for users -- even experts -- to acquire valuable insights from KG data. LinkQ simplifies this process by first interpreting a user's question, then converting it into a well-formed KG query. By using the LLM to construct a query instead of directly answering the user's question, LinkQ guards against the LLM hallucinating or generating false, erroneous information. By integrating an LLM into LinkQ, users are able to conduct both exploratory and confirmatory data analysis, with the LLM helping to iteratively refine open-ended questions into precise ones. To demonstrate the efficacy of LinkQ, we conducted a qualitative study with five KG practitioners and distill their feedback. 
Our results indicate that practitioners find LinkQ effective for KG question-answering, and desire future LLM-assisted systems for the exploratory analysis of graph databases.","authors":[{"affiliations":["MIT Lincoln Laboratory, Lexington, United States"],"email":"harry.li@ll.mit.edu","is_corresponding":true,"name":"Harry Li"},{"affiliations":["Tufts University, Medford, United States"],"email":"gabriel.appleby@tufts.edu","is_corresponding":false,"name":"Gabriel Appleby"},{"affiliations":["MIT Lincoln Laboratory, Lexington, United States"],"email":"ashley.suh@ll.mit.edu","is_corresponding":false,"name":"Ashley Suh"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1193","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"LinkQ: An LLM-Assisted Visual Interface for Knowledge Graph Question-Answering","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1199","abstract":"In the digital landscape, the ubiquity of data visualizations in media underscores the necessity for accessibility to ensure inclusivity for all users, including those with visual impairments. Current visual content often fails to cater to the needs of screen reader users due to the absence of comprehensive textual descriptions. To address this gap, we propose in this paper a framework designed to empower media content creators to transform charts into descriptive narratives. This tool not only facilitates the understanding of complex visual data through text but also fosters a broader awareness of accessibility in digital content creation. Through the application of this framework, users can interpret and convey the insights of data visualizations more effectively, accommodating a diverse audience. 
Our evaluations reveal that this tool not only enhances the comprehension of data visualizations but also promotes new perspectives on the represented data, thereby broadening the interpretative possibilities for all users.","authors":[{"affiliations":["Polytechnique Montr\u00e9al, Montr\u00e9al, Canada"],"email":"qiangxu1204@gmail.com","is_corresponding":true,"name":"Qiang Xu"},{"affiliations":["Polytechnique Montreal, Montreal, Canada"],"email":"thomas.hurtut@polymtl.ca","is_corresponding":false,"name":"Thomas Hurtut"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1199","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"From Graphs to Words: A Computer-Assisted Framework for the Production of Accessible Text Descriptions","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1207","abstract":"An essential task of an air traffic controller is to manage the traffic flow by predicting future trajectories. Complex traffic patterns are difficult to predict and manage and impose cognitive load on the air traffic controllers. In this work, we present an interactive visual analytics interface which facilitates the detection and resolution of complex traffic patterns for air traffic controllers. The interface supports users in detecting complex clusters of aircraft and uses visual representations to communicate proposed re-routings to the controllers. The interface further enables the air traffic controllers (ATCos) to visualize and simultaneously compare how different re-routing strategies for each individual aircraft yield a reduction of complexity in the entire sector for the next hour. The development of the concepts was supported by the domain-specific feedback we received from six fully licensed and operational air traffic controllers in an iterative design process over a period of 14 months.","authors":[{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden","Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"elmira.zohrevandi@liu.se","is_corresponding":true,"name":"Elmira Zohrevandi"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden","Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"katerina.vrotsou@liu.se","is_corresponding":false,"name":"Katerina Vrotsou"},{"affiliations":["Institute of Science and Technology, Norrk\u00f6ping, Sweden","Institute of Science and Technology, Norrk\u00f6ping, Sweden"],"email":"carl.westin@liu.se","is_corresponding":false,"name":"Carl A. L.
Westin"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden","Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"jonas.lundberg@liu.se","is_corresponding":false,"name":"Jonas Lundberg"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden","Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"anders.ynnerman@liu.se","is_corresponding":false,"name":"Anders Ynnerman"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1207","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Design of a Real-Time Visual Analytics Decision Support Interface to Manage Air Traffic Complexity","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1211","abstract":"Transfer function design is crucial in volume rendering, as it directly influences the visual representation and interpretation of volumetric data. However, creating effective transfer functions that align with users\u2019 visual objectives is often challenging due to the complex parameter space and the semantic gap between transfer function values and features of interest within the volume. In this work, we propose a novel approach that leverages recent advancements in language-vision models to bridge this semantic gap. By employing a fully differentiable rendering pipeline and an image-based loss function guided by language descriptions, our method generates transfer functions that yield volume-rendered images closely matching the user\u2019s intent. We demonstrate the effectiveness of our approach in creating meaningful transfer functions from simple descriptions, empowering users to intuitively express their desired visual outcomes with minimal effort. This advancement streamlines the transfer function design process and makes volume rendering more accessible to a broader range of users.","authors":[{"affiliations":["Vanderbilt University, Nashville, United States"],"email":"sangwon.jeong@vanderbilt.edu","is_corresponding":true,"name":"Sangwon Jeong"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"jixianli@sci.utah.edu","is_corresponding":false,"name":"Jixian Li"},{"affiliations":["Lawrence Livermore National Laboratory , Livermore, United States"],"email":"shusenl@sci.utah.edu","is_corresponding":false,"name":"Shusen Liu"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"crj@sci.utah.edu","is_corresponding":false,"name":"Chris R. 
Johnson"},{"affiliations":["Vanderbilt University, Nashville, United States"],"email":"matthew.berger@vanderbilt.edu","is_corresponding":false,"name":"Matthew Berger"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1211","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Text-based transfer function design for semantic volume rendering","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1224","abstract":"Diffusion-based generative models\u2019 impressive ability to create convincing images has garnered global attention. However, their complex structures and operations often pose challenges for non-experts to grasp. We present Diffusion Explainer, the first interactive visualization tool that explains how Stable Diffusion transforms text prompts into images. Diffusion Explainer tightly integrates a visual overview of Stable Diffusion\u2019s complex structure with explanations of the underlying operations. By comparing image generation of prompt variants, users can discover the impact of keyword changes on image generation. A 56-participant user study demonstrates that Diffusion Explainer offers substantial learning benefits to non-experts. Our tool has been used by over 10,300 users from 124 countries at https://poloclub.github.io/diffusion-explainer/.","authors":[{"affiliations":["Georgia Tech, Atlanta, United States"],"email":"seongmin@gatech.edu","is_corresponding":true,"name":"Seongmin Lee"},{"affiliations":["GA Tech, Atlanta, United States","IBM Research AI, Cambridge, United States"],"email":"benjamin.hoover@ibm.com","is_corresponding":false,"name":"Benjamin Hoover"},{"affiliations":["IBM Research AI, Cambridge, United States"],"email":"hendrik@strobelt.com","is_corresponding":false,"name":"Hendrik Strobelt"},{"affiliations":["Georgia Tech, Atlanta, United States"],"email":"jayw@gatech.edu","is_corresponding":false,"name":"Zijie J. 
Wang"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"speng65@gatech.edu","is_corresponding":false,"name":"ShengYun Peng"},{"affiliations":["Georgia Institute of Technology , Atlanta , United States"],"email":"apwright@gatech.edu","is_corresponding":false,"name":"Austin P Wright"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"kevin.li@gatech.edu","is_corresponding":false,"name":"Kevin Li"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"haekyu@gatech.edu","is_corresponding":false,"name":"Haekyu Park"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"alexanderyang@gatech.edu","is_corresponding":false,"name":"Haoyang Yang"},{"affiliations":["Georgia Tech, Atlanta, United States"],"email":"polo@gatech.edu","is_corresponding":false,"name":"Duen Horng (Polo) Chau"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1224","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Diffusion Explainer: Visual Explanation for Text-to-image Stable Diffusion","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1235","abstract":"A high number of samples often leads to occlusion in scatterplots, which hinders data perception and analysis. De-cluttering approaches based on spatial transformation reduce visual clutter by remapping samples using the entire available scatterplot domain. Such regularized scatterplots may still be used for data analysis tasks, if the spatial transformation is smooth and preserves the original neighborhood relations of samples. Recently, Rave et al. proposed an efficient regularization method based on integral images. We propose a generalization of their regularization scheme using sector-based transformations with the aim of increasing sample uniformity of the resulting scatterplot. 
We document the improvement of our approach using various uniformity measures.","authors":[{"affiliations":["University of M\u00fcnster, M\u00fcnster, Germany"],"email":"hennes.rave@uni-muenster.de","is_corresponding":true,"name":"Hennes Rave"},{"affiliations":["University of M\u00fcnster, M\u00fcnster, Germany"],"email":"molchano@uni-muenster.de","is_corresponding":false,"name":"Vladimir Molchanov"},{"affiliations":["University of M\u00fcnster, M\u00fcnster, Germany"],"email":"linsen@uni-muenster.de","is_corresponding":false,"name":"Lars Linsen"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1235","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Uniform Sample Distribution in Scatterplots via Sector-based Transformation","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1236","abstract":"Automatically generating data visualizations in response to human utterances on datasets necessitates a deep semantic understanding of the data utterance, including implicit and explicit references to data attributes, visualization tasks, and necessary data preparation steps. Natural Language Interfaces (NLIs) for data visualization have explored ways to infer such information, yet challenges persist due to inherent uncertainty in human speech. Recent advances in Large Language Models (LLMs) provide an avenue to address these challenges, but their ability to extract the relevant semantic information remains unexplored. In this study, we evaluate four publicly available LLMs (GPT-4, Gemini-Pro, Llama3, and Mixtral), investigating their ability to comprehend utterances even in the presence of uncertainty and identify the relevant data context and visual tasks. Our findings reveal that LLMs are sensitive to uncertainties in utterances. Despite this sensitivity, they are able to extract the relevant data context. However, LLMs struggle with inferring visualization tasks. Based on these results, we highlight future research directions on using LLMs for visualization generation. Our supplementary materials have been shared on OSF: https://osf.io/j342a/wiki/home/?view_only=b4051ffc6253496d9bce818e4a89b9f9","authors":[{"affiliations":["University of Maryland, College Park, United States"],"email":"hbako@umd.edu","is_corresponding":true,"name":"Hannah K.
Bako"},{"affiliations":["University of Maryland, College Park, United States"],"email":"arshnoorbhutani8@gmail.com","is_corresponding":false,"name":"Arshnoor Bhutani"},{"affiliations":["The University of Texas at Austin, Austin, United States"],"email":"xinyi.liu@utexas.edu","is_corresponding":false,"name":"Xinyi Liu"},{"affiliations":["University of Maryland, College Park, United States"],"email":"kcobbina@cs.umd.edu","is_corresponding":false,"name":"Kwesi Adu Cobbina"},{"affiliations":["University of Maryland, College Park, United States"],"email":"leozcliu@umd.edu","is_corresponding":false,"name":"Zhicheng Liu"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1236","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Evaluating the Semantic Profiling Abilities of LLMs for Natural Language Utterances in Data Visualization","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1248","abstract":"Statistical practices such as building regression models or running hypothesis tests rely on following rigorous procedures of steps and verifying assumptions on data to produce valid results. However, common statistical tools do not verify users\u2019 decision choices and provide low-level statistical functions without instructions on the whole analysis practice. Users can easily misuse analysis methods, potentially decreasing the validity of results. To address this problem, we introduce GuidedStats, an interactive interface within computational notebooks that encapsulates guidance, models, visualization, and exportable results into interactive workflows. It breaks down typical analysis processes, such as linear regression and two-sample T-tests, into interactive steps supplemented with automatic visualizations and explanations for step-wise evaluation. Users can iterate on input choices to refine their models, while recommended actions and exports allow the user to continue their analysis in code. 
Case studies show how GuidedStats offers valuable instructions for conducting fluid statistical analyses while finding possible assumption violations in the underlying data, supporting flexible and accurate statistical analyses.","authors":[{"affiliations":["New York University, New York, United States"],"email":"yz9381@nyu.edu","is_corresponding":true,"name":"Yuqi Zhang"},{"affiliations":["Carnegie Mellon University, Pittsburgh, United States"],"email":"adamperer@cmu.edu","is_corresponding":false,"name":"Adam Perer"},{"affiliations":["Carnegie Mellon University, Pittsburgh, United States"],"email":"willepp@cmu.edu","is_corresponding":false,"name":"Will Epperson"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1248","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Guided Statistical Workflows with Interactive Explanations and Assumption Checking","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1264","abstract":"The Local Moran's I statistic is a valuable tool for identifying localized patterns of spatial autocorrelation. Understanding these patterns is crucial in spatial analysis, but interpreting the statistic can be difficult. To simplify this process, we introduce three novel visualizations that enhance the interpretation of Local Moran's I results. These visualizations can be interactively linked to one another, and to established visualizations, to offer a more holistic exploration of the results. 
We provide a JavaScript library with implementations of these new visual elements, along with a web dashboard that demonstrates their integrated use.","authors":[{"affiliations":["NIH, Rockville, United States","Queen's University, Belfast, United Kingdom"],"email":"masonlk@nih.gov","is_corresponding":true,"name":"Lee Mason"},{"affiliations":["Queen's University Belfast , Belfast , United Kingdom"],"email":"b.hicks@qub.ac.uk","is_corresponding":false,"name":"Bl\u00e1naid Hicks"},{"affiliations":["National Institutes of Health, Rockville, United States"],"email":"jonas.dealmeida@nih.gov","is_corresponding":false,"name":"Jonas S Almeida"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1264","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Demystifying Spatial Dependence: Interactive Visualizations for Interpreting Local Spatial Autocorrelation","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1274","abstract":"This study examines the impact of positive and negative contrast polarities (i.e., light and dark modes) on the performance of younger adults and people in their late adulthood (PLA). In a crowdsourced study with 134 participants (69 below age 60, 66 aged 60 and above), we assessed their accuracy and time performing analysis tasks across three common visualization types (Bar, Line, Scatterplot) and two contrast polarities (positive and negative). We observed that, across both age groups, the polarity that led to better performance and the resulting amount of improvement varied on an individual basis, with each polarity benefiting comparable proportions of participants. Additionally, we observed that the choice of contrast polarity can have an impact on time similar to that of the choice of visualization type, resulting in an average percent difference of around 36%. These findings indicate that, overall, the effects of contrast polarity on visual analysis performance do not noticeably change with age. 
Furthermore, they underscore the importance of making visualizations available in both contrast polarities to better support a broad audience with differing needs.","authors":[{"affiliations":["University of Massachusetts Amherst, Amherst, United States"],"email":"zwhile@cs.umass.edu","is_corresponding":true,"name":"Zack While"},{"affiliations":["University of Massachusetts Amherst, Amherst, United States"],"email":"asarv@cs.umass.edu","is_corresponding":false,"name":"Ali Sarvghad"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1274","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Dark Mode or Light Mode? Exploring the Impact of Contrast Polarity on Visualization Performance Between Age Groups","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1276","abstract":"Machine Learning models for chart-grounded Q&A (CQA) often treat charts as images, but performing CQA on pixel values has proven challenging. We thus investigate a resource overlooked by current ML-based approaches: the declarative documents describing how charts should visually encode data (i.e., chart specifications). In this work, we use chart specifications to enhance language models (LMs) for chart-reading tasks, such that the resulting system can robustly understand language for CQA. Through a case study with 359 bar charts, we test novel fine-tuning schemes on both GPT-3 and T5 using a new dataset curated for two CQA tasks: question-answering and visual explanation generation. Our text-only approaches strongly outperform vision-based GPT-4 on explanation generation (99% vs. 63% accuracy), and show promising results for question-answering (57-67% accuracy). Through in-depth experiments, we also show that our text-only approaches are mostly robust to natural language variation.","authors":[{"affiliations":["Adobe Research, San Jose, United States"],"email":"victorbursztyn2022@u.northwestern.edu","is_corresponding":true,"name":"Victor S.
Bursztyn"},{"affiliations":["Adobe Research, Seattle, United States"],"email":"jhoffs@adobe.com","is_corresponding":false,"name":"Jane Hoffswell"},{"affiliations":["Adobe Research, San Jose, United States"],"email":"sguo@adobe.com","is_corresponding":false,"name":"Shunan Guo"},{"affiliations":["Adobe Research, San Jose, United States"],"email":"eunyee@adobe.com","is_corresponding":false,"name":"Eunyee Koh"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1276","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Representing Charts as Text for Language Models: An In-Depth Study of Question Answering for Bar Charts","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1277","abstract":"Trust is a subjective yet fundamental component of human-computer interaction, and is a determining factor in shaping the efficacy of data visualizations. Prior research has identified five dimensions of trust assessment in visualizations (credibility, clarity, reliability, familiarity, and confidence), and observed that these dimensions tend to vary predictably along with certain features of the visualization being evaluated. This raises a further question: how do the design features driving viewers' trust assessment vary with the characteristics of the viewers themselves? By reanalyzing data from these studies through the lens of individual differences, we build a more detailed map of the relationships between design features, individual characteristics, and trust behaviors. In particular, we model the distinct contributions of endogenous design features (such as visualization type, or the use of color) and exogenous user characteristics (such as visualization literacy), as well as the interactions between them. We then use these findings to make recommendations for individualized and adaptive visualization design.","authors":[{"affiliations":["Smith College, Northampton, United States"],"email":"jcrouser@smith.edu","is_corresponding":true,"name":"R. Jordan Crouser"},{"affiliations":["Smith College, Northampton, United States"],"email":"cmatoussi@smith.edu","is_corresponding":false,"name":"Syrine Matoussi"},{"affiliations":["Smith College, Northampton, United States"],"email":"ekung@smith.edu","is_corresponding":false,"name":"Lan Kung"},{"affiliations":["Washington University in St. Louis, St. Louis, United States"],"email":"p.saugat@wustl.edu","is_corresponding":false,"name":"Saugat Pandey"},{"affiliations":["Washington University in St. Louis, St. Louis, United States"],"email":"m.oen@wustl.edu","is_corresponding":false,"name":"Oen G McKinley"},{"affiliations":["Washington University in St. Louis, St. 
Louis, United States"],"email":"alvitta@wustl.edu","is_corresponding":false,"name":"Alvitta Ottley"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1277","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Building and Eroding: Exogenous and Endogenous Factors that Influence Subjective Trust in Visualization","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1285","abstract":"This study examines the impact of social-comparison risk visualizations on public health communication, comparing the effects of traditional bar charts against alternative jitter plots emphasizing geographic variability (geo jitter). The research highlights that whereas both visualization types increased perceived vulnerability, behavioral intent, and policy support, the geo jitter plots were significantly more effective in reducing unjustified personal attributions. Importantly, the findings also underscore the emotional challenges faced by visualization viewers from marginalized communities, indicating a need for designs that are sensitive to the potential for reinforcing stereotypes or eliciting negative emotions. This work suggests a strategic reevaluation of visual communication tools in public health to enhance understanding and engagement without contributing to negative attributions or emotional distress.","authors":[{"affiliations":["3iap, Raleigh, United States"],"email":"eli@3iap.com","is_corresponding":false,"name":"Eli Holder"},{"affiliations":["Northeastern University, Boston, United States","University of California Merced, Merced, United States"],"email":"l.padilla@northeastern.edu","is_corresponding":true,"name":"Lace M. Padilla"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1285","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"\"Must Be a Tuesday\": Affect, Attribution, and Geographic Variability in Equity-Oriented Visualizations of Population Health Disparities","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1292","abstract":"Collaborative planning for congenital heart diseases typically involves creating physical heart models through 3D printing, which are then examined by both surgeons and cardiologists. 
Recent developments in mobile augmented reality (AR) technologies have presented a viable alternative, known for their ease of use and portability. However, there is still a lack of research examining the utilization of multi-user mobile AR environments to support collaborative planning for cardiovascular surgeries. We created ARCollab, an iOS AR app designed for enabling multiple surgeons and cardiologists to interact with a patient's 3D heart model in a shared environment. ARCollab enables surgeons and cardiologists to import heart models, manipulate them through gestures and collaborate with other users, eliminating the need for fabricating physical heart models. Our evaluation of ARCollab's usability and usefulness in enhancing collaboration, conducted with three cardiothoracic surgeons and two cardiologists, marks the first human evaluation of a multi-user mobile AR tool for surgical planning. ARCollab is open-source, available at https://github.com/poloclub/arcollab.","authors":[{"affiliations":["Georgia Tech, Atlanta, United States"],"email":"pratham.mehta001@gmail.com","is_corresponding":true,"name":"Pratham Darrpan Mehta"},{"affiliations":["Georgia Tech, Atlanta, United States"],"email":"rnarayanan39@gatech.edu","is_corresponding":false,"name":"Rahul Ozhur Narayanan"},{"affiliations":["Georgia Tech, Atlanta, United States"],"email":"harsha5431@gmail.com","is_corresponding":false,"name":"Harsha Karanth"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"alexanderyang@gatech.edu","is_corresponding":false,"name":"Haoyang Yang"},{"affiliations":["Emory University, Atlanta, United States"],"email":"slesnickt@kidsheart.com","is_corresponding":false,"name":"Timothy C Slesnick"},{"affiliations":["Emory University/Children's Healthcare of Atlanta, Atlanta, United States"],"email":"fawwaz.shaw@choa.org","is_corresponding":false,"name":"Fawwaz Shaw"},{"affiliations":["Georgia Tech, Atlanta, United States"],"email":"polo@gatech.edu","is_corresponding":false,"name":"Duen Horng (Polo) Chau"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1292","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Multi-User Mobile Augmented Reality for Cardiovascular Surgical Planning","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1301","abstract":"\"Reactionary delay\" is a result of the accumulated cascading effects of knock-on train delays. It is becoming an increasing problem as shared railway infrastructure becomes more crowded. The chaotic nature of its effects is notoriously hard to predict. We use a stochastic Monte-Carlo-style simulation of reactionary delay that produces whole distributions of likely reactionary delay.
Our contribution is demonstrating how Zoomable GlyphTables -- case-by-variable tables in which cases are rows, variables are columns, variables are complex composite metrics that incorporate distributions, and cells contain mini-charts that depict these at different levels of detail through zoom interaction -- help interpret these results, aiding understanding of the causes and effects of reactionary delay and informing timetable robustness testing and tweaking. We describe our design principles, demonstrate how these supported our analytical tasks, and reflect on the potential for Zoomable GlyphTables to be used more widely.","authors":[{"affiliations":["City, University of London, London, United Kingdom"],"email":"a.slingsby@city.ac.uk","is_corresponding":true,"name":"Aidan Slingsby"},{"affiliations":["Risk Solutions, Warrington, United Kingdom"],"email":"jonathan.hyde@risksol.co.uk","is_corresponding":false,"name":"Jonathan Hyde"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1301","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Zoomable Glyph Tables for Interpreting Probabilistic Model Outputs for Reactionary Train Delays","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1026","abstract":"We present a visual analytics approach for multi-level visual exploration of users\u2019 interaction strategies in an interactive digital environment. The use of interactive touchscreen exhibits in informal learning environments, such as museums and science centers, often incorporates frameworks that classify learning processes, such as Bloom\u2019s taxonomy, to achieve better user engagement and knowledge transfer. To analyze user behavior within these digital environments, interaction logs are recorded to capture diverse exploration strategies. However, analysis of such logs is challenging, especially in terms of coupling interactions and cognitive learning processes, and existing work within learning and educational contexts remains limited. To address these gaps, we develop a visual analytics approach for analyzing interaction logs that supports exploration at the individual user level and multi-user comparison. The approach utilizes algorithmic methods to identify similarities in users' interactions and reveal their exploration strategies. We motivate and illustrate our approach through an application scenario, using event sequences derived from interaction log data in an experimental study conducted with science center visitors from diverse backgrounds and demographics. The study involves 14 users completing tasks of increasing complexity, designed to stimulate different levels of cognitive learning processes.
We implement our approach in an interactive visual analytics prototype system, named VISID, and together with domain experts, discover a set of task-solving exploration strategies, such as \"cascading\" and \"nested-loop\", which reflect different levels of learning processes from Bloom's taxonomy. Finally, we discuss the generalizability and scalability of the presented system and the need for further research with data acquired in the wild.","authors":[{"affiliations":["Media and Information Technology, Norrk\u00f6ping, Sweden"],"email":"peilin.yu@liu.se","is_corresponding":true,"name":"Peilin Yu"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"aida.vitoria@liu.se","is_corresponding":false,"name":"Aida Nordman"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"marta.koc-januchta@liu.se","is_corresponding":false,"name":"Marta M. Koc-Januchta"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"konrad.schonborn@liu.se","is_corresponding":false,"name":"Konrad J Sch\u00f6nborn"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"lonni.besancon@gmail.com","is_corresponding":false,"name":"Lonni Besan\u00e7on"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"katerina.vrotsou@liu.se","is_corresponding":false,"name":"Katerina Vrotsou"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1026","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Revealing Interaction Dynamics: Multi-Level Visual Exploration of User Strategies with an Interactive Digital Environment","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1031","abstract":"In soccer, player scouting aims to find players suitable for a team to increase the winning chance in future matches. To scout suitable players, coaches and analysts need to consider various complicated factors, such as the players' performance in the tactics of a new team, which is hard to learn directly from their historical performance. Match simulation methods have been introduced to scout players by estimating their expected contributions to a new team. However, they usually focus on the simulation of match results and hardly support interactive analysis to navigate potential target players and compare them in fine-grained simulated behaviors. In this work, we propose a visual analytics method to assist soccer player scouting based on match simulation. We construct a two-level match simulation framework for estimating both match results and player behaviors when a player comes to a new team. Based on the framework, we develop a visual analytics system, Team-Scouter, to facilitate the simulative-based soccer player scouting process through player navigation, comparison, and explanation. 
With our system, coaches and analysts can find potential players suitable for the team and compare them on historical and expected performances. To explain the players' expected performances, the system provides a visual comparison between the simulated behaviors of the player and the actual ones. The usefulness and effectiveness of the system are demonstrated by two case studies on a real-world dataset and an expert interview.","authors":[{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"caoanqi28@163.com","is_corresponding":true,"name":"Anqi Cao"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"xxie@zju.edu.cn","is_corresponding":false,"name":"Xiao Xie"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"2366385033@qq.com","is_corresponding":false,"name":"Runjin Zhang"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"1282533692@qq.com","is_corresponding":false,"name":"Yuxin Tian"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"fanmu_032@zju.edu.cn","is_corresponding":false,"name":"Mu Fan"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"zhang_hui@zju.edu.cn","is_corresponding":false,"name":"Hui Zhang"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"ycwu@zju.edu.cn","is_corresponding":false,"name":"Yingcai Wu"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1031","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Team-Scouter: Simulative Visual Analytics of Soccer Player Scouting","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1032","abstract":"Dynamic topic modeling is useful at discovering the development and change in latent topics over time. However, present methodology relies on algorithms that separate document and word representations. This prevents the creation of a meaningful embedding space where changes in word usage and documents can be directly analyzed in a temporal context. This paper proposes an expansion of the compass-aligned temporal Word2Vec methodology into dynamic topic modeling. Such a method allows for the direct comparison of word and document embeddings across time in dynamic topics. This enables the creation of visualizations that incorporate diachronic word embeddings within the context of documents into topic visualizations. In experiments against the current state-of-the-art, our proposed method demonstrates overall competitive performance in topic relevancy and diversity across temporal datasets of varying size. 
Simultaneously, it provides insightful visualizations focused on temporal word embeddings while maintaining the insights provided by global topic evolution, advancing our understanding of how topics evolve over time.","authors":[{"affiliations":["Virginia Tech, Blacksburg, United States"],"email":"d4n1elp@vt.edu","is_corresponding":true,"name":"Daniel Palamarchuk"},{"affiliations":["Virginia Polytechnic Institute of Technology , Blacksburg, United States"],"email":"lemaraw@vt.edu","is_corresponding":false,"name":"Lemara Williams"},{"affiliations":["Virginia Tech, Blacksburg, United States"],"email":"bmayer@cs.vt.edu","is_corresponding":false,"name":"Brian Mayer"},{"affiliations":["Savannah River National Laboratory, Aiken, United States"],"email":"thomas.danielson@srnl.doe.gov","is_corresponding":false,"name":"Thomas Danielson"},{"affiliations":["Tulane University, New Orleans, United States"],"email":"rfaust1@tulane.edu","is_corresponding":false,"name":"Rebecca Faust"},{"affiliations":["Savannah River National Laboratory, Aiken, United States"],"email":"larry.deschaine@srnl.doe.gov","is_corresponding":false,"name":"Larry M Deschaine PhD"},{"affiliations":["Virginia Tech, Blacksburg, United States"],"email":"north@vt.edu","is_corresponding":false,"name":"Chris North"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1032","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Visualizing Temporal Topic Embeddings with a Compass","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1039","abstract":"Propagation analysis refers to studying how information spreads on social media, a pivotal endeavor for understanding social sentiment and public opinions. Numerous studies contribute to visualizing information spread, but few have considered the implicit and complex diffusion patterns among multiple platforms. To bridge the gap, we collaborated with professionals to discover crucial factors that dissect the mechanism of cross-platform information spread. Based on that, we propose an information diffusion model that estimates the likelihood of a topic/post spreading among different social media platforms. Moreover, we propose a novel visual metaphor that encapsulates cross-platform patterns in a manner analogous to the spread of seeds across gardens. Specifically, we visualize social platforms, posts, implicit cross-platform routes, and salient instances as elements of a virtual ecosystem \u2014 gardens, flowers, winds, and seeds, respectively. We further develop a visual analytic system, namely BloomWind, that enables users to quickly identify the cross-platform diffusion patterns and investigate the relevant social media posts. 
Ultimately, we demonstrate the usage of BloomWind through two case studies and validate its effectiveness using expert interviews.","authors":[{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"940662579@qq.com","is_corresponding":true,"name":"Jianing Yin"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"hzjia@zju.edu.cn","is_corresponding":false,"name":"Hanze Jia"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"zhoubuwei@zju.edu.cn","is_corresponding":false,"name":"Buwei Zhou"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"tangtan@zju.edu.cn","is_corresponding":false,"name":"Tan Tang"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"yingluu@zju.edu.cn","is_corresponding":false,"name":"Lu Ying"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"sn_ye@zju.edu.cn","is_corresponding":false,"name":"Shuainan Ye"},{"affiliations":["Michigan State University, East Lansing, United States"],"email":"pengtaiq@msu.edu","is_corresponding":false,"name":"Tai-Quan Peng"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"ycwu@zju.edu.cn","is_corresponding":false,"name":"Yingcai Wu"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1039","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Blowing Seeds Across Gardens: Visualizing Implicit Propagation of Cross-Platform Social Media Posts","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1059","abstract":"When treating Head and Neck cancer patients, oncologists have to navigate a complicated series of treatment decisions for each patient. The relationship between each treatment decision and the potential tradeoff of tumor control and toxicity risk is poorly understood, leaving oncologists to largely rely on institutional knowledge and general guidelines that do not take into account specific patient circumstances. Evaluating these risks relies on a complicated understanding of several different factors such as patient health, spatial tumor spread and treatment side effect risk that can not be captured through simple heuristics. To support clinicians in better understanding tradeoffs when deciding on treatment courses, we developed DITTO, a digital-twin and visual computing system that allows clinicians to analyze nuanced patient risk for each patient and decide on an optimal treatment plan. DITTO relies on a sequential Deep Reinforcement Learning (DRL) system to deliver personalized risk of both long-term and short-term disease outcome and toxicity risk for HNC patients. Based on a participatory collaborative design alongside oncologists, we also implement several explainability methods to support clinical trust and encourage healthy skepticism when using our models. 
We evaluate the efficacy of our model through quantitative evaluation of model performance and case studies with qualitative feedback. Finally, we discuss design lessons for developing clinical visual XAI applications for clinical end users.","authors":[{"affiliations":["University of Illinois at Chicago, Chicago, United States"],"email":"awentze2@uic.edu","is_corresponding":true,"name":"Andrew Wentzel"},{"affiliations":["University of Houston, Houston, United States"],"email":"skattia@mdanderson.org","is_corresponding":false,"name":"Serageldin Attia"},{"affiliations":["University of Illinois Chicago, Chicago, United States"],"email":"zhangz@uic.edu","is_corresponding":false,"name":"Xinhua Zhang"},{"affiliations":["University of Iowa, Iowa City, United States"],"email":"guadalupe-canahuate@uiowa.edu","is_corresponding":false,"name":"Guadalupe Canahuate"},{"affiliations":["University of Texas, Houston, United States"],"email":"cdfuller@mdanderson.org","is_corresponding":false,"name":"Clifton David Fuller"},{"affiliations":["University of Illinois at Chicago, Chicago, United States"],"email":"g.elisabeta.marai@gmail.com","is_corresponding":false,"name":"G. Elisabeta Marai"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1059","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"DITTO: A Visual Digital Twin for Interventions and Temporal Treatment Outcomes in Head and Neck Cancer","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1060","abstract":"There is increased interest in understanding the interplay between text and visuals in the field of data visualization. However, this attention has predominantly been on the use of text in standalone visualizations (such as text annotation overlays) or augmenting text stories supported by a series of independent views. In this paper, we shift from the traditional focus on single-chart annotations to characterize the nuanced but crucial communication role of text in the complex environment of interactive dashboards. Through a survey and analysis of 190 dashboards in the wild, plus 13 expert interview sessions with experienced dashboard authors, we highlight the distinctive nature of text as an integral component of the dashboard experience, while delving into the categories, semantic levels, and functional roles of text, and exploring how these text elements are coalesced by dashboard authors to guide and inform dashboard users. Our contributions are threefold. First, we distill qualitative and quantitative findings from our studies to characterize current practices of text use in dashboards, including a categorization of text-based components and design patterns. 
Second, we leverage current practices and existing literature to propose, discuss, and validate recommended practices for text in dashboards, embodied as a set of 12 heuristics that underscore the semantic and functional role of text in offering navigational cues, contextualizing data insights, supporting reading order, among other concerns. Third, we reflect on our findings plus existing literature to identify gaps and propose opportunities for data visualization researchers to push the boundaries on text usage for dashboards, from authoring support and interactivity to text generation and content personalization. Our research underscores the significance of elevating text as a first-class citizen in data visualization, and the need to support the inclusion of textual components and their interactive affordances in dashboard design.","authors":[{"affiliations":["Tableau Research, Seattle, United States"],"email":"nicole.sultanum@gmail.com","is_corresponding":true,"name":"Nicole Sultanum"},{"affiliations":["Tableau Research, Palo Alto, United States"],"email":"vsetlur@tableau.com","is_corresponding":false,"name":"Vidya Setlur"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1060","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"From Instruction to Insight: Exploring the Semantic and Functional Roles of Text in Interactive Dashboards","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1063","abstract":"While previous work has found success in deploying visualizations as museum exhibits, it has not investigated whether museum context impacts visitor behaviour with these exhibits. We present an interactive Deep-time Literacy Visualization Exhibit (DeLVE) to help museum visitors understand deep time (lengths of extremely long geological processes) by improving proportional reasoning skills through comparison of different time periods. DeLVE uses a new visualization idiom, Connected Multi-Tier Ranges, to visualize curated datasets of past events across multiple scales of time, relating extreme scales with concrete scales that have more familiar magnitudes and units. Museum staff at three separate museums approved the deployment of DeLVE as a digital kiosk, and devoted time to curating a unique dataset in each of them. We collect data from two sources, an observational study and system trace logs. We discuss the importance of context: similar museum exhibits in different contexts were received very differently by visitors. We additionally discuss differences in our process from Sedlmair et al.'s design study methodology which is focused on design studies triggered by connection with collaborators rather than the discovery of a concept to communicate. 
Supplemental materials are available at: https://osf.io/z53dq/?view_only=4df33aad207144aca149982412125541","authors":[{"affiliations":["The University of British Columbia, Vancouver, Canada"],"email":"marasolen@gmail.com","is_corresponding":true,"name":"Mara Solen"},{"affiliations":["University of British Columbia , Vancouver, Canada"],"email":"sultananigar70@gmail.com","is_corresponding":false,"name":"Nigar Sultana"},{"affiliations":["University of British Columbia, Vancouver, Canada"],"email":"laura.lukes@ubc.ca","is_corresponding":false,"name":"Laura A. Lukes"},{"affiliations":["University of British Columbia, Vancouver, Canada"],"email":"tmm@cs.ubc.ca","is_corresponding":false,"name":"Tamara Munzner"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1063","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"DeLVE into Earth\u2019s Past: A Visualization-Based Exhibit Deployed Across Multiple Museum Contexts","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1067","abstract":"Large Language Models (LLMs), such as ChatGPT and Llama, have revolutionized various domains through their impressive natural language processing capabilities. However, their deployment raises significant ethical and security concerns, including their potential misuse for generating fake news or aiding illegal activities. Thus, ensuring the development of secure and trustworthy LLMs is crucial. Traditional red teaming approaches for identifying vulnerabilities in AI models are limited by their reliance on manual prompt construction and expertise. This paper introduces a novel visual analytics system, AdversaFlow, designed to enhance the security of LLMs against adversarial attacks through human-AI collaboration. Our system, which involves adversarial training between a target model and a red model, is equipped with a unique multi-level adversarial flow visualization and a fluctuation path visualization technique. These features provide a detailed insight into the adversarial dynamics and the robustness of LLMs, thereby enabling AI security experts to identify and mitigate vulnerabilities effectively. We deliver quantitative evaluations for the models and present case studies that validate the utility of our system and share insights for future AI security solutions. 
Our contributions include a human-AI collaboration framework for LLM red teaming, a comprehensive visual analytics system to support adversarial pattern presentation and fluctuation analysis, and valuable lessons learned in visual analytics for AI security.","authors":[{"affiliations":["Zhejiang University, Ningbo, China"],"email":"dengdazhen@outlook.com","is_corresponding":true,"name":"Dazhen Deng"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"zhangchuhan024@163.com","is_corresponding":false,"name":"Chuhan Zhang"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"huawzheng@gmail.com","is_corresponding":false,"name":"Huawei Zheng"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"yw.pu@zju.edu.cn","is_corresponding":false,"name":"Yuwen Pu"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"sji@zju.edu.cn","is_corresponding":false,"name":"Shouling Ji"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"ycwu@zju.edu.cn","is_corresponding":false,"name":"Yingcai Wu"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1067","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"AdversaFlow: Visual Red Teaming for Large Language Models with Multi-Level Adversarial Flow","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1077","abstract":"A growing body of work draws on feminist thinking to challenge assumptions about how people engage with and use visualizations. This work draws on feminist values, driving design and research guidelines that account for the influences of power and neglect. This prior work is largely prescriptive, however, forgoing articulation of how feminist theories of knowledge \u2014 or feminist epistemology \u2014 can alter research design and outcomes. At the core of our work is an engagement with feminist epistemology, drawing attention to how a new framework for how we know what we know enabled us to overcome intellectual tensions in our research. Specifically, we focus on the theoretical concept of entanglement, central to recent feminist scholarship, and contribute: a history of entanglement in the broader scope of feminist theory; an articulation of the main points of entanglement theory for a visualization context; and a case study of research outcomes as evidence of the potential of feminist epistemology to impact visualization research. 
This work answers a call in the community to embrace a broader set of theoretical and epistemic foundations and provides a starting point for bringing different theories into visualization research.","authors":[{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"derya.akbaba@liu.se","is_corresponding":true,"name":"Derya Akbaba"},{"affiliations":["Emory University, Atlanta, United States"],"email":"lauren.klein@emory.edu","is_corresponding":false,"name":"Lauren Klein"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"miriah.meyer@liu.se","is_corresponding":false,"name":"Miriah Meyer"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1077","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Entanglements for Visualization: Changing Research Outcomes through Feminist Theory","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1096","abstract":"Large Language Models (LLMs) have shown great potential in intelligent visualization systems, especially for domain-specific applications. Integrating LLMs into visualization systems presents challenges, and we categorize these challenges into three alignments: domain problems with LLMs, visualization with LLMs, and interaction with LLMs. To achieve these alignments, we propose a framework and outline a workflow to guide the application of fine-tuned LLMs to enhance visual interactions for domain-specific tasks. These alignment challenges are critical in education as they call for an intelligent visualization system to support beginners' self-regulated learning. Therefore, we apply the framework to education and introduce Tailor-Mind, an interactive visualization system designed to facilitate self-regulated learning for artificial intelligence beginners. Drawing on insights from a preliminary study, we identify self-regulated learning tasks and fine-tuning objectives to guide visualization design and tuning data construction. Our focus on aligning visualization with fine-tuned LLM makes Tailor-Mind more like a personalized tutor. Tailor-Mind also supports interactive recommendations to help beginners better achieve their learning goals.
Model performance evaluations and user studies confirm that Tailor-Mind improves the self-regulated learning experience, effectively validating the proposed framework.","authors":[{"affiliations":["Fudan University, Shanghai, China"],"email":"lgao.lynne@gmail.com","is_corresponding":true,"name":"Lin Gao"},{"affiliations":["Fudan University, Shanghai, China"],"email":"kingluther6666@gmail.com","is_corresponding":false,"name":"Jing Lu"},{"affiliations":["Fudan University, Shanghai, China"],"email":"gemini25szk@gmail.com","is_corresponding":false,"name":"Zekai Shao"},{"affiliations":["Fudan University, Shanghai, China"],"email":"ziyuelin917@gmail.com","is_corresponding":false,"name":"Ziyue Lin"},{"affiliations":["Fudan University, Shanghai, China"],"email":"sbyue23@m.fudan.edu.cn","is_corresponding":false,"name":"Shengbin Yue"},{"affiliations":["Fudan University, Shanghai, China"],"email":"chiokit0819@gmail.com","is_corresponding":false,"name":"Chiokit Ieong"},{"affiliations":["Fudan University, Shanghai, China"],"email":"21307130094@m.fudan.edu.cn","is_corresponding":false,"name":"Yi Sun"},{"affiliations":["University of Vienna, Vienna, Austria"],"email":"rory.james.zauner@univie.ac.at","is_corresponding":false,"name":"Rory Zauner"},{"affiliations":["Fudan University, Shanghai, China"],"email":"zywei@fudan.edu.cn","is_corresponding":false,"name":"Zhongyu Wei"},{"affiliations":["Fudan University, Shanghai, China"],"email":"simingchen3@gmail.com","is_corresponding":false,"name":"Siming Chen"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1096","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Fine-Tuned Large Language Model for Visualization System: A Study on Self-Regulated Learning in Education","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1099","abstract":"Tactics play an important role in team sports by guiding how players interact on the field. Both sports fans and experts have a demand for analyzing sports tactics. Existing approaches allow users to visually perceive the multivariate tactical effects. However, these approaches usually consider each tactic as a whole, making it difficult for users to connect the complex interactions inside each tactic to the final tactical effect. In this work, we collaborate with basketball experts and propose a progressive approach to help users gain a deeper understanding of how each tactic works and customize tactics on demand. Users can progressively sketch on a tactic board, and a coach agent will simulate the possible actions in each step and present the simulation to users with facet visualizations. We develop an extensible framework that integrates large language models (LLMs) and visualizations to help users communicate with the coach agent with multimodal inputs.
Based on the framework, we design and develop Smartboard, an agent-based interactive visualization system for fine-grained tactical analysis. Smartboard provides users with a structured process of setup, simulation, and evolution, allowing for iterative exploration of tactics based on specific personalized scenarios. We conduct case studies based on real-world basketball datasets to demonstrate the usefulness of our system.","authors":[{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"ziao_liu@outlook.com","is_corresponding":true,"name":"Ziao Liu"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"xxie@zju.edu.cn","is_corresponding":false,"name":"Xiao Xie"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"3170101799@zju.edu.cn","is_corresponding":false,"name":"Moqi He"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"zhao_ws@zju.edu.cn","is_corresponding":false,"name":"Wenshuo Zhao"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"wuyihong0606@gmail.com","is_corresponding":false,"name":"Yihong Wu"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"lycheecheng@zju.edu.cn","is_corresponding":false,"name":"Liqi Cheng"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"zhang_hui@zju.edu.cn","is_corresponding":false,"name":"Hui Zhang"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"ycwu@zju.edu.cn","is_corresponding":false,"name":"Yingcai Wu"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1099","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Smartboard: Visual Exploration of Team Tactics with LLM Agent","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1100","abstract":"\u201cCorrelation does not imply causation\u201d is a famous mantra in statistical and visual analysis. However, consumers of visualizations often draw causal conclusions when only correlations between variables are shown. In this paper, we investigate factors that contribute to causal relationships users perceive in visualizations. We collected a corpus of concept pairs from variables in widely used datasets and created visualizations that depict varying correlative associations using three typical statistical chart types. We conducted two MTurk studies on (1) preconceived notions on causal relations without charts, and (2) perceived causal relations with charts, for each concept pair. Our results indicate that people make assumptions about causal relationships between pairs of concepts even without seeing any visualized data. Moreover, our results suggest that these assumptions constitute causal priors that, in combination with chart type and visualized association, impact how data visualizations are interpreted. 
The results also suggest that causal priors may lead to over- or under-estimation in perceived causal relations in different circumstances, and that those priors can also impact users\u2019 confidence in their causal assessments. Using data from the studies, we develop a model to capture the interaction between causal priors and visualized associations as they combine to impact a user\u2019s perceived causal relations. In addition to reporting the study results and analyses, we provide an open dataset of causal priors for 56 specific concept pairs that can serve as a potential benchmark for future studies. We also suggest heuristic-based guidelines to help designers improve visualization design choices to better support visual causal inference.","authors":[{"affiliations":["University of North Carolina-Chapel Hill, Chapel Hill, United States"],"email":"zeyuwang@cs.unc.edu","is_corresponding":true,"name":"Arran Zeyu Wang"},{"affiliations":["UNC-Chapel Hill, Chapel Hill, United States"],"email":"borland@renci.org","is_corresponding":false,"name":"David Borland"},{"affiliations":["Davidson College, Davidson, United States"],"email":"tapeck@davidson.edu","is_corresponding":false,"name":"Tabitha C. Peck"},{"affiliations":["University of North Carolina, Chapel Hill, United States"],"email":"vaapad@live.unc.edu","is_corresponding":false,"name":"Wenyuan Wang"},{"affiliations":["University of North Carolina, Chapel Hill, United States"],"email":"gotz@unc.edu","is_corresponding":false,"name":"David Gotz"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1100","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Causal Priors and Their Influence on Judgements of Causality in Visualized Data","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1121","abstract":"Acute stroke demands prompt diagnosis and treatment to achieve optimal patient outcomes. However, the intricate and irregular nature of clinical data associated with acute stroke, particularly blood pressure (BP) measurements, presents substantial obstacles to effective visual analytics and decision-making. Through a year-long collaboration with experienced neurologists, we developed PhenoFlow, a visual analytics system that leverages the collaboration between human and Large Language Models (LLMs) to analyze the extensive and complex data of acute ischemic stroke patients. PhenoFlow pioneers an innovative workflow, where the LLM serves as a data wrangler while neurologists explore and supervise the output using visualizations and natural language interactions. This approach enables neurologists to focus more on decision-making with reduced cognitive load. To protect sensitive patient information, PhenoFlow only utilizes metadata to make inferences and synthesize executable codes, without accessing raw patient data. 
This ensures that the results are both reproducible and interpretable while maintaining patient privacy. The system incorporates a slice-and-wrap design that employs temporal folding to create an overlaid circular visualization. Combined with a linear bar graph, this design aids in exploring meaningful patterns within irregularly measured BP data. Through case studies, PhenoFlow has demonstrated its capability to support iterative analysis of extensive clinical datasets, reducing cognitive load and enabling neurologists to make well-informed decisions. Grounded in long-term collaboration with domain experts, our research demonstrates the potential of utilizing LLMs to tackle current challenges in data-driven clinical decision-making for acute ischemic stroke patients.","authors":[{"affiliations":["Seoul National University, Seoul, Korea, Republic of"],"email":"jykim@hcil.snu.ac.kr","is_corresponding":true,"name":"Jaeyoung Kim"},{"affiliations":["Seoul National University, Seoul, Korea, Republic of"],"email":"sihyeon@hcil.snu.ac.kr","is_corresponding":false,"name":"Sihyeon Lee"},{"affiliations":["Seoul National University, Seoul, Korea, Republic of"],"email":"hj@hcil.snu.ac.kr","is_corresponding":false,"name":"Hyeon Jeon"},{"affiliations":["Korea University Guro Hospital, Seoul, Korea, Republic of"],"email":"gooday19@gmail.com","is_corresponding":false,"name":"Keon-Joo Lee"},{"affiliations":["Hankuk University of Foreign Studies, Yongin-si, Korea, Republic of"],"email":"bkim@hufs.ac.kr","is_corresponding":false,"name":"Bohyoung Kim"},{"affiliations":["Seoul National University Bundang Hospital, Seongnam, Korea, Republic of"],"email":"braindoc@snu.ac.kr","is_corresponding":false,"name":"HEE JOON"},{"affiliations":["Seoul National University, Seoul, Korea, Republic of"],"email":"jseo@snu.ac.kr","is_corresponding":false,"name":"Jinwook Seo"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1121","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"PhenoFlow: A Human-LLM Driven Visual Analytics System for Exploring Large and Complex Stroke Datasets","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1128","abstract":"Citations allow quickly identifying related research. If multiple publications are selected as seeds, specific suggestions for related literature can be made based on the number of incoming and outgoing citation links to this selection. Interactively adding recommended publications to the selection refines the next suggestion and incrementally builds a relevant collection of publications. Following this approach, the paper presents a search and foraging approach, PUREsuggest, which combines citation-based suggestions with augmented visualizations of the citation network. 
The focus and novelty of the approach are, first, the transparency of how the rankings are explained visually and, second, that the process can be steered through user-defined keywords, which reflect topics of interest. The system can be used to build new literature collections, to update and assess existing ones, as well as to use the collected literature for identifying relevant experts in the field. We evaluated the recommendation approach through simulated sessions and performed a user study investigating search strategies and usage patterns supported by the interface.","authors":[{"affiliations":["University of Bamberg, Bamberg, Germany"],"email":"fabian.beck@uni-bamberg.de","is_corresponding":true,"name":"Fabian Beck"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1128","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"PUREsuggest: Citation-based Literature Search and Visual Exploration with Keyword-controlled Rankings","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1137","abstract":"Inspired by recent advances in digital fabrication, artists and scientists have demonstrated that physical data encodings (i.e., data physicalizations) can increase engagement with data, foster collaboration, and in some cases, improve data legibility and analysis relative to digital alternatives. However, prior empirical studies have only investigated abstract data encoded in physical form (e.g., laser cut bar charts) and not continuously sampled spatial data fields relevant to climate and medical science (e.g., heights, temperatures, densities, and velocities sampled on a spatial grid). This paper presents the design and results of the first study to characterize human performance in 3D spatial data analysis tasks across analogous physical and digital visualizations. Participants analyzed continuous spatial elevation data with three visualization modalities: (1) 2D digital visualization; (2) perspective-tracked, stereoscopic \"fishtank\" virtual reality; and (3) 3D printed data physicalization. Their tasks included tracing paths downhill, looking up spatial locations and comparing their relative heights, and identifying and reporting the minimum and maximum heights within certain spatial regions. As hypothesized, in most cases, participants performed the tasks just as well or better in the physical modality (based on time and error metrics). Additional results include an analysis of open-ended feedback from participants and discussion of implications for further research on the value of data physicalization.
All data and supplemental materials are available at https://osf.io/7xdq4/?view_only=7416f8cfca85473889456fb69527abbc","authors":[{"affiliations":["University of Minnesota, Minneapolis, United States"],"email":"bridger.g.herman@gmail.com","is_corresponding":true,"name":"Bridger Herman"},{"affiliations":["Beth Israel Deaconess Medical Center, Boston, United States"],"email":"cdjackso@bidmc.harvard.edu","is_corresponding":false,"name":"Cullen D. Jackson"},{"affiliations":["University of Minnesota, Minneapolis, United States"],"email":"dfk@umn.edu","is_corresponding":false,"name":"Daniel F. Keefe"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1137","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Touching the Ground: Evaluating the Effectiveness of Data Physicalizations for Spatial Data Analysis Tasks","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1140","abstract":"Written language is a useful mode for non-visual creative activities like writing essays and planning searches. This paper investigates the integration of written language into the visualization design process. We call this idea a `written rudder,' since it acts as a guiding force or strategy for the design. Via an interview study of 24 working visualization designers, we first established that only a minority of participants systematically use written rudders to aid in design. A second study with 15 visualization designers examined four different variants of rudders: asking questions, stating conclusions, composing a narrative, and writing titles. Overall, participants had a positive reaction; designers recognized the benefits of explicitly writing down components of the design and indicated that they would use this approach in future design work. More specifically, two approaches \u2013 writing questions and writing conclusions/takeaways \u2013 were seen as beneficial across the design process, while writing narratives showed promise mainly for the creation stage. Although concerns around potential bias during data exploration were raised, participants also discussed strategies to mitigate such concerns.
This paper contributes to a deeper understanding of the interplay between language and visualization, and proposes a straightforward, lightweight addition to the visualization design process.","authors":[{"affiliations":["UC Berkeley, Berkeley, United States"],"email":"chase_stokes@berkeley.edu","is_corresponding":true,"name":"Chase Stokes"},{"affiliations":["Self, Berkeley, United States"],"email":"clarahu@berkeley.edu","is_corresponding":false,"name":"Clara Hu"},{"affiliations":["UC Berkeley, Berkeley, United States"],"email":"hearst@berkeley.edu","is_corresponding":false,"name":"Marti Hearst"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1140","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"It's a Good Idea to Put It Into Words: Writing 'Rudders' in the Initial Stages of Visualization Design","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1142","abstract":"To deploy machine learning (ML) models on-device, practitioners use compression algorithms to shrink and speed up models while maintaining their high-quality output. A critical aspect of compression in practice is model comparison, including tracking many compression experiments, identifying subtle changes in model behavior, and negotiating complex accuracy-efficiency trade-offs. However, existing compression tools poorly support comparison, leading to tedious and, sometimes, incomplete analyses spread across disjoint tools. To support real-world comparative workflows, we develop an interactive visual system called Compress & Compare. Within a single interface, Compress & Compare surfaces promising compression strategies by visualizing provenance relationships between compressed models and reveals compression-induced behavior changes by comparing models' predictions, weights, and activations. We demonstrate how Compress & Compare supports common compression analysis tasks through two case studies\u2014debugging failed compression on generative language models and identifying compression-induced biases in image classification. We further evaluate Compress & Compare in a user study with eight compression experts, illustrating its potential to provide structure to compression workflows, help practitioners build intuition about compression, and encourage thorough analysis of compression\u2019s effect on model behavior. 
Through these evaluations, we identify compression-specific challenges that future visual analytics tools should consider and Compress & Compare visualizations that may generalize to broader model comparison tasks.","authors":[{"affiliations":["Massachusetts Institute of Technology, Cambridge, United States"],"email":"aboggust@mit.edu","is_corresponding":true,"name":"Angie Boggust"},{"affiliations":["Carnegie Mellon University, Pittsburgh, United States"],"email":"vsivaram@andrew.cmu.edu","is_corresponding":false,"name":"Venkatesh Sivaraman"},{"affiliations":["Apple, Cambridge, United States"],"email":"yassogba@gmail.com","is_corresponding":false,"name":"Yannick Assogba"},{"affiliations":["Apple, Seattle, United States"],"email":"donghao@apple.com","is_corresponding":false,"name":"Donghao Ren"},{"affiliations":["Apple, Pittsburgh, United States"],"email":"domoritz@cmu.edu","is_corresponding":false,"name":"Dominik Moritz"},{"affiliations":["Apple, Seattle, United States"],"email":"fred.hohman@gmail.com","is_corresponding":false,"name":"Fred Hohman"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1142","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Compress and Compare: Interactively Evaluating Efficiency and Behavior Across ML Model Compression Experiments","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1147","abstract":"Large Language Models (LLMs) like GPT-4 which support multimodal input (i.e., prompts containing images in addition to text) have immense potential to advance visualization research. However, many questions exist about the visual capabilities of such models, including how well they can read and interpret visually represented data. In our work, we address this question by evaluating the GPT-4 multimodal LLM using a suite of task sets meant to assess the model\u2019s visualization literacy. The task sets are based on existing work in the visualization community addressing both automated chart question answering and human visualization literacy across multiple settings. Our assessment finds that GPT-4 can perform tasks such as recognizing trends and extreme values, and also demonstrates some understanding of visualization design best-practices. By contrast, GPT-4 struggles with simple value retrieval when not provided with the original dataset, lacks the ability to reliably distinguish between colors in charts, and occasionally suffers from hallucination and inconsistency. We conclude by reflecting on the model\u2019s strengths and weaknesses as well as the potential utility of models like GPT-4 for future visualization research. 
We also release all code, stimuli, and results for the task sets at the following link: (REDACTED FOR REVIEW)","authors":[{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"abendeck3@gatech.edu","is_corresponding":true,"name":"Alexander Bendeck"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"john.stasko@cc.gatech.edu","is_corresponding":false,"name":"John Stasko"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1147","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"An Empirical Evaluation of the GPT-4 Multimodal Language Model on Visualization Literacy Tasks","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1150","abstract":"Composite visualization represents a widely embraced design that combines multiple visual representations to create an integrated view. However, the traditional approach of creating composite visualizations in immersive environments typically occurs asynchronously outside of the immersive space and is carried out by experienced experts. In this work, we take the first step to empower users to participate in the creation of composite visualization within immersive environments through embodied interactions. This could provide a flexible and fluid experience for data exploration and facilitate a deep understanding of the relationship between data visualizations. We begin with forming a design space of embodied interactions to create various types of composite visualizations with the consideration of data relationships. Drawing inspiration from people's natural experience of manipulating physical objects, we design interactions to directly assemble composite visualizations in immersive environments. Building upon the design space, we present a series of case studies showcasing the interactive method to create different kinds of composite visualizations in Virtual Reality (VR). Subsequently, we conduct a user study to evaluate the usability of the derived interaction techniques and user experience of embodiedly creating composite visualizations. 
We find that empowering users to participate in composite visualizations through embodied interactions enables them to flexibly leverage different visualization representations for understanding and communicating the relationships between different views, which underscores the potential for a set of application scenarios in the future.","authors":[{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"qzhual@connect.ust.hk","is_corresponding":true,"name":"Qian Zhu"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"luttul@umich.edu","is_corresponding":false,"name":"Tao Lu"},{"affiliations":["Adobe Research, San Jose, United States"],"email":"sguo@adobe.com","is_corresponding":false,"name":"Shunan Guo"},{"affiliations":["Hong Kong University of Science and Technology, Hong Kong, Hong Kong"],"email":"mxj@cse.ust.hk","is_corresponding":false,"name":"Xiaojuan Ma"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"yalongyang@hotmail.com","is_corresponding":false,"name":"Yalong Yang"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1150","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"CompositingVis: Exploring Interaction for Creating Composite Visualizations in Immersive Environments","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1153","abstract":"Points of interest on a map, such as restaurants, hotels, or subway stations, give rise to categorical point data: data that have a fixed location and one or more categorical attributes. Consequently, recent years have seen various set visualization approaches that visually connect points of the same category to support users in understanding the spatial distribution of categories. Existing methods use complex and often highly irregular shapes to connect points of the same category, leading to high cognitive load for the user. In this paper, we introduce SimpleSets that use simple shapes to enclose categorical point patterns and provide a low-complexity overview of the data distribution. We give formal definitions of point patterns that correspond to simple shapes and describe an algorithm that partitions categorical points into few such patterns. Our second contribution is a rendering algorithm that transforms a given partition into a clean set of shapes resulting in an aesthetically pleasing set visualization. Our algorithm pays particular attention to resolving intersections between nearby shapes in a consistent manner.
We compare SimpleSets to state-of-the-art set visualizations using standard datasets from the literature. SimpleSets are designed to visualize disjoint categories; however, we discuss avenues to extend our technique to overlapping set systems.","authors":[{"affiliations":["TU Eindhoven, Eindhoven, Netherlands"],"email":"s.w.v.d.broek@tue.nl","is_corresponding":true,"name":"Steven van den Broek"},{"affiliations":["TU Eindhoven, Eindhoven, Netherlands"],"email":"w.meulemans@tue.nl","is_corresponding":false,"name":"Wouter Meulemans"},{"affiliations":["TU Eindhoven, Eindhoven, Netherlands"],"email":"b.speckmann@tue.nl","is_corresponding":false,"name":"Bettina Speckmann"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1153","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"SimpleSets: Capturing Categorical Point Patterns with Simple Shapes","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1155","abstract":"Interactive visualizations are powerful tools for Exploratory Data Analysis (EDA), but how do they affect the observations analysts make about their data? We conducted a qualitative experiment with 13 professional data scientists analyzing two datasets within Jupyter notebooks, collecting a rich dataset of interaction traces and think-aloud utterances. By qualitatively analyzing participant verbalizations, we introduce the concept of \"observation-analysis states.\" These states capture both the dataset characteristics a participant focuses on and the insights they express. Our definition reveals that interactive visualizations on average lead to earlier and more complex insights about relationships between dataset attributes compared to static visualizations. Moreover, this process identified new measures for studying representation use in notebooks such as hover time, revisiting rate and representational diversity. In particular, revisiting rates revealed behavior where analysts revisit particular representations throughout the time course of an analysis, serving more as navigational aids through an EDA than as strict hypothesis answering tools. We show how these measures helped identify other patterns of analysis behavior, such as the \"80-20 rule\", where a small subset of representations drove the majority of observations.
Based on these findings, we offer design guidelines for interactive exploratory analysis tooling and reflect on future directions for studying the role that visualizations play in EDA.","authors":[{"affiliations":["MIT, Cambridge, United States"],"email":"dwootton@mit.edu","is_corresponding":true,"name":"Dylan Wootton"},{"affiliations":["MIT, Cambridge, United States"],"email":"amyraefoxphd@gmail.com","is_corresponding":false,"name":"Amy Rae Fox"},{"affiliations":["University of Colorado Boulder, Boulder, United States"],"email":"evan.peck@colorado.edu","is_corresponding":false,"name":"Evan Peck"},{"affiliations":["MIT, Cambridge, United States"],"email":"arvindsatya@mit.edu","is_corresponding":false,"name":"Arvind Satyanarayan"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1155","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Charting EDA: How Visualizations and Interactions Shape Analysis in Computational Notebooks.","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1179","abstract":"Multi-objective evolutionary algorithms (MOEAs) have emerged as powerful tools for solving complex optimization problems characterized by multiple, often conflicting, objectives. While advancements have been made in computational efficiency as well as diversity and convergence of solutions, a critical challenge persists: the internal evolutionary mechanisms are opaque to human users. Drawing upon the successes of explainable AI in explaining complex algorithms and models, we argue that the need to understand the underlying evolutionary operators and population dynamics in MOEAs aligns well with a visual analytics paradigm. This paper introduces ParetoTracker, a visual analytics framework designed to support the comprehension and inspection of population dynamics in the evolutionary processes of MOEAs. Informed by preliminary literature review and expert interviews, the framework establishes a multi-level analysis scheme, which caters to user engagement and exploration ranging from examining overall trends in performance metrics to conducting fine-grained inspections of evolutionary operations. In contrast to conventional practices that require manual plotting of solutions for each generation, ParetoTracker facilitates the examination of temporal trends and dynamics across consecutive generations in an integrated visual interface. 
The effectiveness of the framework is demonstrated through case studies and expert interviews focused on widely adopted benchmark optimization problems.","authors":[{"affiliations":["Southern University of Science and Technology, Shenzhen, China"],"email":"zhangzr32021@mail.sustech.edu.cn","is_corresponding":false,"name":"Zherui Zhang"},{"affiliations":["Southern University of Science and Technology, Shenzhen, China"],"email":"yangf2020@mail.sustech.edu.cn","is_corresponding":false,"name":"Fan Yang"},{"affiliations":["Southern University of Science and Technology, Shenzhen, China"],"email":"ranchengcn@gmail.com","is_corresponding":false,"name":"Ran Cheng"},{"affiliations":["Southern University of Science and Technology, Shenzhen, China"],"email":"mayx@sustech.edu.cn","is_corresponding":true,"name":"Yuxin Ma"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1179","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"ParetoTracker: Understanding Population Dynamics in Multi-objective Evolutionary Algorithms through Visual Analytics","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1185","abstract":"This paper presents an interactive technique to explain visual patterns in network visualizations to analysts who are unfamiliar with these visualizations and who are learning to read them. Learning a visualization requires mastering its visual grammar and decoding information presented through visual marks, graphical encodings, and spatial configurations. To help people learn unfamiliar network visualization designs and extract meaningful information, we introduce the concept of interactive pattern explanation that allows viewers to select an arbitrary area in a visualization, then mines the underlying data patterns, and eventually explains both visual and data patterns present in the viewer\u2019s selection. In a qualitative and a quantitative user study with a total of 32 participants, we compare interactive pattern explanations to only textual and only visual (cheatsheets) explanations. 
Our results show that interactive explanations increase learning of i) unfamiliar visualizations, ii) patterns in network science, and iii) the respective network terminology.","authors":[{"affiliations":["Newcastle University, Newcastle Upon Tyne, United Kingdom"],"email":"xinhuan.shu@gmail.com","is_corresponding":true,"name":"Xinhuan Shu"},{"affiliations":["University of Edinburgh, Edinburgh, United Kingdom"],"email":"alexis.pister@hotmail.com","is_corresponding":false,"name":"Alexis Pister"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"tangjunxiu@zju.edu.cn","is_corresponding":false,"name":"Junxiu Tang"},{"affiliations":["University of Toronto, Toronto, Canada"],"email":"fanny@dgp.toronto.edu","is_corresponding":false,"name":"Fanny Chevalier"},{"affiliations":["Inria, Bordeaux, France","University of Edinburgh, Edinburgh, United Kingdom"],"email":"bbach@inf.ed.ac.uk","is_corresponding":false,"name":"Benjamin Bach"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1185","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Does This Have a Particular Meaning?: Interactive Pattern Explanation for Network Visualizations","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1193","abstract":"Emerging multimodal large language models (MLLMs) exhibit great potential for chart question answering (CQA). Recent efforts primarily focus on scaling up training datasets (i.e., charts, data tables, and question-answer (QA) pairs) through data collection and synthesis. However, our empirical study on existing MLLMs and CQA datasets reveals notable gaps. First, current data collection and synthesis focus on data volume and lack consideration of fine-grained visual encodings and QA tasks, resulting in unbalanced data distribution divergent from practical CQA scenarios. Second, existing work follows the training recipe of the base MLLMs initially designed for natural images, under-exploring the adaptation to unique chart characteristics, such as rich text elements. To fill the gap, we propose a visualization-referenced instruction tuning approach to guide the training dataset enhancement and model development. Specifically, we propose a novel data engine to effectively filter diverse and high-quality data from existing datasets and subsequently refine and augment the data using LLM-based generation techniques to better align with practical QA tasks and visual encodings. Then, to facilitate the adaptation to chart characteristics, we utilize the enriched data to train an MLLM by unfreezing the vision encoder and incorporating a mixture-of-resolution adaptation strategy for enhanced fine-grained recognition. Experimental results validate the effectiveness of our approach. Even with fewer training examples, our model consistently outperforms state-of-the-art CQA models on established benchmarks.
We also contribute a dataset split as a benchmark for future research.","authors":[{"affiliations":["The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China"],"email":"xingchen.zeng@outlook.com","is_corresponding":true,"name":"Xingchen Zeng"},{"affiliations":["The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China"],"email":"hlin386@connect.hkust-gz.edu.cn","is_corresponding":false,"name":"Haichuan Lin"},{"affiliations":["The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China"],"email":"yyebd@connect.ust.hk","is_corresponding":false,"name":"Yilin Ye"},{"affiliations":["The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China","The Hong Kong University of Science and Technology, Hong Kong SAR, China"],"email":"weizeng@hkust-gz.edu.cn","is_corresponding":false,"name":"Wei Zeng"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1193","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Advancing Multimodal Large Language Models in Chart Question Answering with Visualization-Referenced Instruction Tuning","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1202","abstract":"The Dunning-Kruger Effect (DKE) is a metacognitive phenomenon where low-skilled individuals tend to overestimate their competence while high-skilled individuals tend to underestimate their competence. This effect has been observed in a number of domains including humor, grammar, and logic. In this paper, we explore if and how DKE manifests in visual reasoning and visual data analysis tasks. Across two online user studies involving (1) a sliding puzzle game and (2) a scatterplot-based categorization task, we demonstrate that individuals are susceptible to DKE in visual tasks: those who performed best underestimated their performance, while bottom performers overestimated their performance. In addition, we contribute novel analyses that correlate susceptibility of DKE with several variables including personality traits and user interactions. 
Our findings pave the way for novel modes of bias detection via interaction patterns and establish promising directions towards interventions tailored to an individual's personality traits.","authors":[{"affiliations":["Emory University, Atlanta, United States"],"email":"mengyu.chen@emory.edu","is_corresponding":true,"name":"Mengyu Chen"},{"affiliations":["Emory University, Atlanta, United States"],"email":"yijun.liu2@emory.edu","is_corresponding":false,"name":"Yijun Liu"},{"affiliations":["Emory University, Atlanta, United States"],"email":"emily.wall@emory.edu","is_corresponding":false,"name":"Emily Wall"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1202","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Unmasking Dunning-Kruger Effect in Visual Reasoning and Visual Data Analysis","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1204","abstract":"We present ProvenanceWidgets, a JavaScript library of UI control elements such as radio buttons, checkboxes, and dropdowns to track and dynamically overlay a user's analytic provenance. These in situ overlays not only save screen space but also minimize the amount of time and effort needed to access the same information from elsewhere in the UI. In this paper, we discuss how we design modular UI control elements to track how often and how recently a user interacts with them and design visual overlays showing an aggregated summary as well as a detailed temporal history. We demonstrate the capability of ProvenanceWidgets by recreating three prior widget libraries: (1) Scented Widgets, (2) Phosphor objects, and (3) Dynamic Query Widgets. We also evaluated its expressiveness and conducted case studies with visualization developers to assess its effectiveness. We find that ProvenanceWidgets enables developers to implement custom provenance-tracking applications effectively. 
ProvenanceWidgets is available as open-source software at https://github.com/ProvenanceWidgets to help application developers build custom provenance-based systems.","authors":[{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"arpitnarechania@gatech.edu","is_corresponding":true,"name":"Arpit Narechania"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"kaustubhodak1@gmail.com","is_corresponding":false,"name":"Kaustubh Odak"},{"affiliations":["ETH Z\u00fcrich, Z\u00fcrich, Switzerland"],"email":"melassady@ai.ethz.ch","is_corresponding":false,"name":"Mennatallah El-Assady"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"endert@gatech.edu","is_corresponding":false,"name":"Alex Endert"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1204","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"ProvenanceWidgets: A Library of UI Control Elements to Track and Dynamically Overlay Analytic Provenance","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1214","abstract":"Graphs are often used to model relationships between entities. The identification and visualization of clusters in graphs enable insight discovery in many application areas, such as life sciences and social sciences. Force-directed graph layout algorithms promote the visual saliency of clusters, as they generally bring adjacent nodes closer together, and push non-adjacent nodes apart. In this work, we study the impact of node ordering on the visual saliency of clusters in orderable node-link diagrams, namely radial diagrams, arc diagrams and symmetric arc diagrams. Through a crowdsourced controlled experiment, we show that users can count clusters consistently more accurately, and to a large extent faster, with orderable node-link diagrams than with three state-of-the-art force-directed layout algorithms, i.e., 'Linlog', 'Backbone', and 'sfdp'. The measured advantage is greater in cases of low cluster separability and/or low compactness. 
A free copy of this paper and all supplemental materials are available at https://osf.io/kc3dg/?view_only=892f7b96752e40a6baefb2e50e866f9d","authors":[{"affiliations":["Luxembourg Institute of Science and Technology, Esch-sur-Alzette, Luxembourg"],"email":"nora.alnaami@list.lu","is_corresponding":false,"name":"Nora Al-Naami"},{"affiliations":["Luxembourg Institute of Science and Technology, Belvaux, Luxembourg"],"email":"nicolas.medoc@list.lu","is_corresponding":false,"name":"Nicolas Medoc"},{"affiliations":["Uppsala University, Uppsala, Sweden"],"email":"matteo.magnani@it.uu.se","is_corresponding":false,"name":"Matteo Magnani"},{"affiliations":["Luxembourg Institute of Science and Technology, Belvaux, Luxembourg"],"email":"mohammad.ghoniem@list.lu","is_corresponding":true,"name":"Mohammad Ghoniem"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1214","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Improved Visual Saliency of Graph Clusters with Orderable Node-Link Layouts","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1218","abstract":"Placing text labels is a common way to explain key elements in a given scene. Given a graphic input and original label information, how to place labels to meet both geometric and aesthetic requirements remains an open and challenging problem. Geometry-wise, traditional rule-driven solutions struggle to capture the complex interactions between labels, let alone consider graphical/appearance content. In terms of aesthetics, training/evaluation data ideally require nontrivial effort and expertise in design, thus resulting in a lack of decent datasets for learning-based methods. To address the above challenges, we formulate the task with a graph representation, where nodes correspond to labels and edges to the between-label interactions, and treat label placement as a node position prediction problem. With this novel representation, we design a Label Placement Graph Transformer (LPGT) to predict label positions. Specifically, edge-level attention, conditioned on node representations, is introduced to reveal potential relationships between labels. To integrate graphic/image information, we design a feature aligning strategy that extracts deep features for nodes and edges efficiently. Next, to address the dataset issue, we collect commercial illustrations with professionally designed label layouts from household appliance manuals, and annotate them with useful information to create a novel dataset named Appliance Manual Illustration Labels (AMIL). 
In a thorough evaluation on AMIL, our LPGT solution achieves promising label placement performance compared with popular baselines.","authors":[{"affiliations":["Southwest University, Beibei, China"],"email":"qujingwei@swu.edu.cn","is_corresponding":true,"name":"Jingwei Qu"},{"affiliations":["Southwest University, Chongqing, China"],"email":"z2211973606@email.swu.edu.cn","is_corresponding":false,"name":"Pingshun Zhang"},{"affiliations":["Southwest University, Beibei, China"],"email":"enyuche@gmail.com","is_corresponding":false,"name":"Enyu Che"},{"affiliations":["College of Computer and Information Science, School of Software, Southwest University, Chongqing, China"],"email":"out1147205215@outlook.com","is_corresponding":false,"name":"Yinan Chen"},{"affiliations":["Stony Brook University, New York, United States"],"email":"hling@cs.stonybrook.edu","is_corresponding":false,"name":"Haibin Ling"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1218","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Graph Transformer for Label Placement","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1232","abstract":"How do cancer cells grow, divide, proliferate and die? How do drugs influence these processes? These are difficult questions that we can attempt to answer with a combination of time-series microscopy experiments, classification algorithms, and data visualization. However, collecting this type of data and applying algorithms to segment and track cells and construct lineages of proliferation is error-prone, and identifying the errors can be challenging since it often requires cross-checking multiple data types. Similarly, analyzing and communicating the results necessitates synthesizing different data types into a single narrative. State-of-the-art visualization methods for such data use independent line charts, tree diagrams, and images in separate views. However, this spatial separation requires the viewer of these charts to combine the relevant pieces of data in memory. To simplify this challenging task, we describe design principles for weaving cell images, time-series data, and tree data into a cohesive visualization. Our design principles are based on choosing a primary data type that drives the layout and integrates the other data types into that layout. We then introduce Aardvark, a system that uses these principles to implement novel visualization techniques. 
Based on Aardvark, we demonstrate the utility of each of these approaches for discovery, communication, and data debugging in a series of case studies.","authors":[{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"devin@sci.utah.edu","is_corresponding":true,"name":"Devin Lange"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"robert.judson-torres@hci.utah.edu","is_corresponding":false,"name":"Robert L Judson-Torres"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"tzangle@chemeng.utah.edu","is_corresponding":false,"name":"Thomas A Zangle"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"alex@sci.utah.edu","is_corresponding":false,"name":"Alexander Lex"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1232","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Aardvark: Composite Visualizations of Trees, Time-Series, and Images","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1251","abstract":"Exploratory data science is an iterative process of obtaining, cleaning, profiling, analyzing, and interpreting data. This cyclical way of working creates challenges within the linear structure of computational notebooks that lead to issues with code quality, recall, and reproducibility. To remedy this, we present Loops, a set of visual support techniques for iterative and exploratory data analysis in computational notebooks. Loops leverages provenance information to visualize the impact of changes made within a notebook. In visualizations of the notebook history, we trace the evolution of the notebook over time and highlight differences between versions. Loops visualizes the provenance of code, markdown, tables, visualizations, and images and their respective differences. Analysts can explore these differences in detail in a separate view. Loops not only improves the reproducibility of notebooks, but also supports analysts in their data science work by showing the effects of changes and facilitating comparison of multiple versions. 
We demonstrate the utility and potential impact of our approach through two use cases and feedback from notebook users with a range of backgrounds.","authors":[{"affiliations":["Johannes Kepler University Linz, Linz, Austria"],"email":"klaus@eckelt.info","is_corresponding":true,"name":"Klaus Eckelt"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"kirangadhave2@gmail.com","is_corresponding":false,"name":"Kiran Gadhave"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"alex@sci.utah.edu","is_corresponding":false,"name":"Alexander Lex"},{"affiliations":["Johannes Kepler University Linz, Linz, Austria"],"email":"marc.streit@jku.at","is_corresponding":false,"name":"Marc Streit"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1251","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Loops: Leveraging Provenance and Visualization to Support Exploratory Data Analysis in Notebooks","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1256","abstract":"People commonly utilize visualizations not only to examine a given dataset, but also to draw generalizable conclusions about the underlying models or phenomena. Previous research has compared human visual inference to that of an optimal Bayesian agent, with deviations from rational analysis viewed as problematic. However, human reliance on non-normative heuristics may prove advantageous in certain circumstances. We investigate scenarios where human intuition might surpass idealized statistical rationality. In two experiments, we examine individuals' accuracy in characterizing the parameters of known data-generating models from bivariate visualizations. Our findings indicate that, although participants generally exhibited lower accuracy compared to statistical models, they frequently outperformed Bayesian agents, particularly when faced with extreme samples. Participants appeared to rely on their internal models to filter out noisy visualizations, thus improving their resilience against spurious data. However, participants displayed overconfidence and struggled with uncertainty estimation. They also exhibited higher variance than statistical machines. Our findings suggest that analyst gut reactions to visualizations may provide an advantage, even when departing from rationality. These results carry implications for designing visual analytics tools, offering new perspectives on how to integrate statistical models and analyst intuition for improved inference and decision-making.","authors":[{"affiliations":["Indiana University, Indianapolis, United States"],"email":"rkoonch@iu.edu","is_corresponding":true,"name":"Ratanond Koonchanok"},{"affiliations":["Argonne National Laboratory, Lemont, United States","University of Illinois Chicago, Chicago, United States"],"email":"papka@anl.gov","is_corresponding":false,"name":"Michael E. 
Papka"},{"affiliations":["Indiana University, Indianapolis, United States"],"email":"redak@iu.edu","is_corresponding":false,"name":"Khairi Reda"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1256","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Trust Your Gut: Comparing Human and Machine Inference from Noisy Visualizations","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1258","abstract":"Providing effective guidance for users has long been an important and challenging task for efficient exploratory visual analytics, especially when selecting variables for visualization in high-dimensional datasets. Correlation is the most widely applied metric for guidance in statistical and analytical tools; however, a reliance on correlation may lead users towards false positives when interpreting causal relations in the data. In this work, inspired by prior insights on the benefits of counterfactual visualization in supporting visual causal inference, we propose a novel, simple, and efficient counterfactual guidance method to enhance causal inference performance in guided exploratory analytics based on insights and concerns gathered from expert interviews. Our technique aims to capitalize on the benefits of counterfactual approaches while reducing their complexity for users. We integrated counterfactual guidance into an exploratory visual analytics system and, using a synthetically generated ground-truth causal dataset, conducted a comparative user study and evaluated to what extent counterfactual guidance can help lead users to more precise visual causal inferences. The results suggest that counterfactual guidance improved visual causal inference performance, and also led to different exploratory behaviors compared to correlation-based guidance. 
Based on these findings, we offer future directions to incorporate and examine counterfactual guidance to better support exploratory visual analytics.","authors":[{"affiliations":["University of North Carolina-Chapel Hill, Chapel Hill, United States"],"email":"zeyuwang@cs.unc.edu","is_corresponding":true,"name":"Arran Zeyu Wang"},{"affiliations":["UNC-Chapel Hill, Chapel Hill, United States"],"email":"borland@renci.org","is_corresponding":false,"name":"David Borland"},{"affiliations":["University of North Carolina, Chapel Hill, United States"],"email":"gotz@unc.edu","is_corresponding":false,"name":"David Gotz"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1258","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Beyond Correlation: Incorporating Counterfactual Guidance to Better Support Exploratory Visual Analysis","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1272","abstract":"In various scientific and industrial domains, analyzing multivariate spatial data, i.e., vectors associated with spatial locations, is common practice. To analyze those datasets, analysts may turn to models such as Spatial Blind Source Separation (SBSS). Designed explicitly for spatial data analysis, SBSS finds latent components in the dataset and is superior to popular non-spatial models like PCA. However, when analysts try different tuning parameter settings, the number of latent components complicates analytical tasks. Based on our years-long collaboration with SBSS researchers, we propose a visualization approach to tackle this challenge. The main component is UnDRground Tubes (UT), a general-purpose idiom combining ideas from set visualization and multidimensional projections. We describe the UT visualization pipeline and integrate UT into an interactive multiple-view system. We demonstrate its effectiveness through interviews with SBSS experts, a qualitative evaluation with visualization experts, and computational experiments. SBSS experts were excited about our approach. They saw many benefits for their work and potential applications for geostatistical data analysis more generally. UT was also very well received by visualization experts. 
Our benchmarks show that the UT projections and heuristics are appropriate.","authors":[{"affiliations":["TU Wien, Vienna, Austria"],"email":"nikolaus.piccolotto@tuwien.ac.at","is_corresponding":true,"name":"Nikolaus Piccolotto"},{"affiliations":["TU Wien, Vienna, Austria"],"email":"mwallinger@ac.tuwien.ac.at","is_corresponding":false,"name":"Markus Wallinger"},{"affiliations":["Institute of Visual Computing and Human-Centered Technology, Vienna, Austria"],"email":"miksch@ifs.tuwien.ac.at","is_corresponding":false,"name":"Silvia Miksch"},{"affiliations":["TU Wien, Vienna, Austria"],"email":"markus.boegl@tuwien.ac.at","is_corresponding":false,"name":"Markus B\u00f6gl"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1272","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"UnDRground Tubes: Exploring Spatial Data With Multidimensional Projections and Set Visualization","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1275","abstract":"We developed and validated an instrument to measure the perceived readability in data visualization: PREVis. Researchers and practitioners can easily use this instrument as part of their evaluations to compare the perceived readability of different visual data representations. Our instrument can complement results from controlled experiments on user task performance or provide additional data during in-depth qualitative work such as design iterations when developing a new technique. Although readability is recognized as an essential quality of data visualizations, so far there has not been a unified definition of the construct in the context of visual representations. As a result, researchers often lack guidance for determining how to ask people to rate their perceived readability of a visualization. To address this issue, we engaged in a rigorous process to develop the first validated instrument targeted at the subjective readability of visual data representations. Our final instrument consists of 11 items across 4 dimensions: understandability, layout clarity, readability of data values, and readability of data patterns. We provide the questionnaire as a document with implementation guidelines on osf.io/9cg8j. 
Beyond this instrument, we contribute a discussion of how researchers have previously assessed visualization readability, and an analysis of the factors underlying perceived readability in visual data representations.","authors":[{"affiliations":["LISN, Universit\u00e9 Paris Saclay, CNRS, Orsay, France","Aviz, Inria, Saclay, France"],"email":"acabouat@gmail.com","is_corresponding":true,"name":"Anne-Flore Cabouat"},{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Orsay, France","Inria, Saclay, France"],"email":"tingying.he@inria.fr","is_corresponding":false,"name":"Tingying He"},{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Orsay, France","Inria, Saclay, France"],"email":"petra.isenberg@inria.fr","is_corresponding":false,"name":"Petra Isenberg"},{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Orsay, France","Inria, Saclay, France"],"email":"tobias.isenberg@gmail.com","is_corresponding":false,"name":"Tobias Isenberg"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1275","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"PREVis: Perceived Readability Evaluation for Visualizations","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1277","abstract":"This paper presents a novel end-to-end framework for closed-form computation and visualization of critical point uncertainty in 2D uncertain scalar fields. Critical points are fundamental topological descriptors used in the visualization and analysis of scalar fields. The uncertainty inherent in data (e.g., observational and experimental data, approximations in simulations, and compression), however, creates uncertainty regarding critical point positions. Uncertainty in critical point positions, therefore, cannot be ignored, given their impact on downstream data analysis tasks. In this work, we study uncertainty in critical points as a function of uncertainty in data modeled with probability distributions. Although Monte Carlo (MC) sampling techniques have been used in prior studies to quantify critical point uncertainty, they are often expensive and are infrequently used in production-quality visualization software. We, therefore, propose a new end-to-end framework to address these challenges that comprises a threefold contribution. First, we derive the critical point uncertainty in closed form, which is more accurate and efficient than the conventional MC sampling methods. Specifically, we provide the closed-form and semianalytical (a mix of closed-form and MC methods) solutions for parametric (e.g., uniform, Epanechnikov) and nonparametric models (e.g., histograms) with finite support. Second, we accelerate critical point probability computations using a parallel implementation with the VTK-m library, which is platform portable. 
Finally, we integrate our implementation with the ParaView software system, demonstrating near-real-time results for real datasets.","authors":[{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"tushar.athawale@gmail.com","is_corresponding":true,"name":"Tushar M. Athawale"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"wangz@ornl.gov","is_corresponding":false,"name":"Zhe Wang"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"pugmire@ornl.gov","is_corresponding":false,"name":"David Pugmire"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"kmorel@acm.org","is_corresponding":false,"name":"Kenneth Moreland"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"gongq@ornl.gov","is_corresponding":false,"name":"Qian Gong"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"klasky@ornl.gov","is_corresponding":false,"name":"Scott Klasky"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"crj@sci.utah.edu","is_corresponding":false,"name":"Chris R. Johnson"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"paul.rosen@utah.edu","is_corresponding":false,"name":"Paul Rosen"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1277","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Uncertainty Visualization of Critical Points of 2D Scalar Fields for Parametric and Nonparametric Probabilistic Models","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1281","abstract":"Participatory budgeting (PB) is a democratic approach to allocating municipal spending that has been adopted in many places in recent years, including in Chicago. Current PB voting resembles a ballot where residents are asked which municipal projects, such as school improvements and road repairs, to fund with a limited budget. In this work, we ask how interactive visualization can benefit PB by conducting a design probe-based interview study (N=13) with policy workers and academics with expertise in PB, urban planning, and civic HCI. Our probe explores how graphical elicitation of voter preferences and a dashboard of voting statistics can be incorporated into a realistic PB tool. Through qualitative analysis, we find that visualization creates opportunities for city government to set expectations about budget constraints while also granting their constituents greater freedom to articulate a wider range of preferences. However, using visualization to provide transparency about PB requires efforts to mitigate potential access barriers and mistrust. 
We call for more visualization professionals to help build civic capacity by working in and studying political systems.","authors":[{"affiliations":["University of Chicago, Chicago, United States"],"email":"kalea@uchicago.edu","is_corresponding":true,"name":"Alex Kale"},{"affiliations":["University of Chicago, Chicago, United States"],"email":"danni6@uchicago.edu","is_corresponding":false,"name":"Danni Liu"},{"affiliations":["University of Chicago, Chicago, United States"],"email":"mariagabrielaa@uchicago.edu","is_corresponding":false,"name":"Maria Gabriela Ayala"},{"affiliations":["University of Chicago, Chicago, United States"],"email":"hwschwab@uchicago.edu","is_corresponding":false,"name":"Harper Schwab"},{"affiliations":["University of Washington, Seattle, United States","University of Utah, Salt Lake City, United States"],"email":"mcnutt.andrew@gmail.com","is_corresponding":false,"name":"Andrew M McNutt"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1281","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"What Can Interactive Visualization do for Participatory Budgeting in Chicago?","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1288","abstract":"Data tables are one of the most common ways in which people encounter data. Although mostly built with text and numbers, data tables have a spatial layout and often exhibit visual elements meant to facilitate their reading. Surprisingly, there is an empirical knowledge gap on how people read and use tables and how different visual aids affect people's ability to use them. In this work, we seek to address this vacuum through a controlled study. We asked participants to repeatedly perform four different tasks with tables in four table representation conditions (plain tables, tables with zebra striping, tables with cell background color encoding cell value, and tables with background bar length in a cell encoding cell value). We analyzed completion time, error rate, gaze-tracking data, mouse movement and participant preferences. We found that visual encodings help for finding maximum values (especially color), but not as much as zebra striping helps in a complex task (comparison of proportional differences). We also characterize typical human behavior for the different tasks. 
These findings can inform the design of tables and research directions for improving presentation of data in tabular form.","authors":[{"affiliations":["University of Victoria, Victoria, Canada"],"email":"yongfengji@uvic.ca","is_corresponding":false,"name":"YongFeng Ji"},{"affiliations":["University of Victoria, Victoria, Canada"],"email":"cperin@uvic.ca","is_corresponding":true,"name":"Charles Perin"},{"affiliations":["University of Victoria, Victoria, Canada"],"email":"nacenta@gmail.com","is_corresponding":false,"name":"Miguel A Nacenta"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1288","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"The Effect of Visual Aids on Reading Numeric Data Tables","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1290","abstract":"Visualization linters are end-user-facing evaluators that automatically identify potential chart issues. These spell-checker-like systems offer a blend of interpretability and customization that is not found in other forms of automated assistance. However, existing linters do not model context and have primarily targeted users who do not need assistance, resulting in obvious, even annoying, advice. We investigate these issues within the domain of color palette design, which serves as a microcosm of visualization design concerns. We contribute a GUI-based color palette linter as a design probe that covers perception, accessibility, context, and other design criteria, and use it to explore visual explanations, integrated fixes, and user-defined linting rules. Through a formative interview study and theory-driven analysis, we find that linters can be meaningfully integrated into graphical contexts, thereby addressing many of their core issues. 
We discuss implications for integrating linters into visualization tools, developing improved assertion languages, and supporting end-user-tunable advice, all laying the groundwork for more effective visualization linters in any context.","authors":[{"affiliations":["University of Washington, Seattle, United States","University of Utah, Salt Lake City, United States"],"email":"mcnutt.andrew@gmail.com","is_corresponding":true,"name":"Andrew M McNutt"},{"affiliations":["University of Washington, Seattle, United States"],"email":"maureen.stone@gmail.com","is_corresponding":false,"name":"Maureen Stone"},{"affiliations":["University of Washington, Seattle, United States"],"email":"jheer@uw.edu","is_corresponding":false,"name":"Jeffrey Heer"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1290","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Mixing Linters with GUIs: A Color Palette Design Probe","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1291","abstract":"Emotion is an important factor to consider when designing visualizations as it can impact the amount of trust viewers place in a visualization, how well they can retrieve information and understand the underlying data, and how much they engage with or connect to a visualization. We conducted five crowdsourced experiments to quantify the effects of color, chart type, data trend, data variability and data density on emotion (measured through self-reported arousal and valence). Results from our experiments show that there are multiple design elements which influence the emotion induced by a visualization and, more surprisingly, that certain data characteristics influence the emotion of viewers even when the data has no meaning. 
In light of these findings, we offer guidelines on how to use color, scale, and chart type to counterbalance and emphasize the emotional impact of immutable data characteristics.","authors":[{"affiliations":["University of Waterloo, Waterloo, Canada","University of Victoria, Victoria, Canada"],"email":"cartergblair@gmail.com","is_corresponding":false,"name":"Carter Blair"},{"affiliations":["University of Victoria, Victoria, Canada","Delft University of Technology, Delft, Netherlands"],"email":"xiyao.wang23@gmail.com","is_corresponding":false,"name":"Xiyao Wang"},{"affiliations":["University of Victoria, Victoria, Canada"],"email":"cperin@uvic.ca","is_corresponding":true,"name":"Charles Perin"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1291","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Quantifying Emotional Responses to Immutable Data Characteristics and Designer Choices in Data Visualizations","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1295","abstract":"Annotations play a vital role in highlighting critical aspects of visualizations, aiding in data externalization and exploration, collaborative data analysis, and visual storytelling. However, despite their widespread use, we identified a lack of a design space capturing common annotation practices. In this paper, we evaluated over 1,800 static annotated charts to understand how people annotate visualizations in practice. Through qualitative coding of these diverse real-world annotated charts, we explore three primary aspects of annotation usage patterns: analytic purposes for chart annotations (e.g., present, identify, summarize, or compare data features), mechanisms for chart annotations (e.g., types and combinations of annotations used, frequency of different annotation types across chart types, etc.), and the data source used to generate the annotations. We then synthesized our findings into a design space of annotations, highlighting key design choices for chart annotations. 
We presented three case studies illustrating our design space as a practical framework for chart annotations to enhance the communication of visualization insights.","authors":[{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"dilshadur@sci.utah.edu","is_corresponding":true,"name":"Md Dilshadur Rahman"},{"affiliations":["University of Oklahoma, Norman, United States"],"email":"quadri@ou.edu","is_corresponding":false,"name":"Ghulam Jilani Quadri"},{"affiliations":["University of South Florida, Tampa, United States"],"email":"bdoppalapudi@usf.edu","is_corresponding":false,"name":"Bhavana Doppalapudi"},{"affiliations":["University of North Carolina-Chapel Hill, Chapel Hill, United States"],"email":"danielle.szafir@cs.unc.edu","is_corresponding":false,"name":"Danielle Albers Szafir"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"paul.rosen@utah.edu","is_corresponding":false,"name":"Paul Rosen"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1295","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"A Qualitative Analysis of Common Practices in Annotations: A Taxonomy and Design Space","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1302","abstract":"We present the results of an exploratory study on how pairs interact with speech commands and touch gestures on a wall-sized display during a collaborative sensemaking task. Previous work has shown that speech commands, alone or in combination with other input modalities, can support visual data exploration by individuals. However, it is still unknown whether and how speech commands can be used in collaboration, and for what tasks. To answer these questions, we developed a functioning prototype that we used as a technology probe. We conducted an in-depth exploratory study with 20 participants (10 pairs) to analyze their interaction choices, the interplay between the input modalities, and their collaboration. While touch was the most used modality, we found that participants preferred speech commands for global operations, used them for distant interaction, and that speech interaction contributed to the awareness of the partner\u2019s actions. Furthermore, the likelihood of using speech commands during collaboration was related to the personality trait of agreeableness. Regarding collaboration styles, participants interacted with speech equally often whether they were in loosely or closely coupled collaboration. While the partners stood closer to each other during close collaboration, they did not walk away from their partner to use speech commands. 
From our findings, we derive and contribute a set of design considerations for collaborative and multimodal interactive data analysis systems.","authors":[{"affiliations":["University of Bremen, Bremen, Germany"],"email":"molina@uni-bremen.de","is_corresponding":true,"name":"Gabriela Molina Le\u00f3n"},{"affiliations":["LISN, Universit\u00e9 Paris-Saclay, CNRS, INRIA, Orsay, France"],"email":"anastasia.bezerianos@universite-paris-saclay.fr","is_corresponding":false,"name":"Anastasia Bezerianos"},{"affiliations":["Inria, Palaiseau, France"],"email":"olivier.gladin@inria.fr","is_corresponding":false,"name":"Olivier Gladin"},{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Orsay, France","Inria, Saclay, France"],"email":"petra.isenberg@inria.fr","is_corresponding":false,"name":"Petra Isenberg"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1302","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Talk to the Wall: The Role of Speech Interaction in Collaborative Visual Analytics","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1307","abstract":"Building information modeling (BIM) describes a central data pool covering the entire life cycle of a construction project. Similarly, building energy modeling (BEM) describes the process of using a 3D representation of a building as a basis for thermal simulations to assess the building\u2019s energy performance. This paper explores the intersection of BIM and BEM, focusing on the challenges and methodologies in converting BIM data into BEM representations for energy performance analysis. BEMTrace integrates 3D data wrangling techniques with visualization methodologies to enhance the accuracy and traceability of the BIM-to-BEM conversion process. Through parsing, error detection, and algorithmic correction of BIM data, our methods generate valid BEM models suitable for energy simulation. Visualization techniques provide transparent insights into the conversion process, aiding error identification, validation, and user comprehension. We introduce context-adaptive selections to facilitate user interaction and understanding throughout the conversion process. 
By evaluating user feedback, we show that BEMTrace can solve domain-specific tasks.","authors":[{"affiliations":["VRVis Zentrum f\u00fcr Virtual Reality und Visualisierung Forschungs-GmbH, Vienna, Austria"],"email":"walch@vrvis.at","is_corresponding":false,"name":"Andreas Walch"},{"affiliations":["VRVis Zentrum f\u00fcr Virtual Reality und Visualisierung Forschungs-GmbH, Vienna, Austria"],"email":"szabo@vrvis.at","is_corresponding":false,"name":"Attila Szabo"},{"affiliations":["VRVis Zentrum f\u00fcr Virtual Reality und Visualisierung Forschungs-GmbH, Vienna, Austria"],"email":"hs@vrvis.at","is_corresponding":false,"name":"Harald Steinlechner"},{"affiliations":["Independent Researcher, Vienna, Austria"],"email":"thomas@ortner.fyi","is_corresponding":false,"name":"Thomas Ortner"},{"affiliations":["Institute of Visual Computing and Human-Centered Technology, Vienna, Austria"],"email":"groeller@cg.tuwien.ac.at","is_corresponding":false,"name":"Eduard Gr\u00f6ller"},{"affiliations":["VRVis Zentrum f\u00fcr Virtual Reality und Visualisierung Forschungs-GmbH, Vienna, Austria"],"email":"johanna.schmidt@vrvis.at","is_corresponding":true,"name":"Johanna Schmidt"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1307","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"BEMTrace: Visualization-driven approach for deriving Building Energy Models from BIM","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1309","abstract":"Visualizations play a critical role in validating and improving statistical models. However, the design space of model check visualizations is not well understood, making it difficult for authors to explore and specify effective graphical model checks. VMC defines a model check visualization using four components: (1) samples of distributions of checkable quantities generated from the model, including predictive distributions for new data and distributions of model parameters; (2) transformations on observed data to facilitate comparison; (3) visual representations of distributions; and (4) layouts to facilitate comparing model samples and observed data. We contribute an implementation of VMC as an R package. We validate VMC by reproducing a set of canonical model check examples, and show how using VMC to generate model checks reduces the edit distance between visualizations relative to existing visualization toolkits. 
The findings of an interview study with three expert modelers who used VMC highlight challenges and opportunities for encouraging exploration of correct, effective model check visualizations.","authors":[{"affiliations":["Northwestern University, Evanston, United States"],"email":"ziyangguo1030@gmail.com","is_corresponding":true,"name":"Ziyang Guo"},{"affiliations":["University of Chicago, Chicago, United States"],"email":"kalea@uchicago.edu","is_corresponding":false,"name":"Alex Kale"},{"affiliations":["Northwestern University, Chicago, United States"],"email":"matthew.kay@gmail.com","is_corresponding":false,"name":"Matthew Kay"},{"affiliations":["Northwestern University, Evanston, United States"],"email":"jhullman@northwestern.edu","is_corresponding":false,"name":"Jessica Hullman"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1309","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"VMC: A Grammar for Visualizing Statistical Model Checks","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1316","abstract":"We apply an approach from cognitive linguistics by mapping Conceptual Metaphor Theory (CMT) to the visualization domain to address patterns of visual conceptual metaphors that are often used in science infographics. Metaphors play an essential part in visual communication and are frequently employed to explain complex concepts. However, their use is often based on intuition, rather than following a formal process. At present, we lack tools and language for understanding and describing metaphor use in visualization to the extent where taxonomy and grammar could guide the creation of visual components, e.g., infographics. Our classification of the visual conceptual mappings within scientific representations is based on the breakdown of visual components in existing scientific infographics. We demonstrate the development of this mapping through a detailed analysis of data collected from four domains (biomedicine, climate, space, and anthropology) that represent a diverse range of visual conceptual metaphors used in the visual communication of science. This work allows us to identify patterns of visual conceptual metaphor use within the domains, resolve ambiguities about why specific conceptual metaphors are used, and develop a better overall understanding of visual metaphor use in scientific infographics. Our analysis shows that ontological and orientational conceptual metaphors are the most widely applied to translate complex scientific concepts. 
To support our findings, we developed a visual exploratory tool based on the collected database that places the individual infographics on a spatio-temporal scale and illustrates the breakdown of visual conceptual metaphors.","authors":[{"affiliations":["Masaryk University, Brno, Czech Republic"],"email":"hana.pokojna@gmail.com","is_corresponding":true,"name":"Hana Pokojn\u00e1"},{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Orsay, France","Inria, Saclay, France"],"email":"tobias.isenberg@gmail.com","is_corresponding":false,"name":"Tobias Isenberg"},{"affiliations":["University of Rostock, Rostock, Germany"],"email":"stefan.bruckner@gmail.com","is_corresponding":false,"name":"Stefan Bruckner"},{"affiliations":["Masaryk University, Brno, Czech Republic"],"email":"kozlikova@fi.muni.cz","is_corresponding":false,"name":"Barbora Kozlikova"},{"affiliations":["University of Bergen, Bergen, Norway","Haukeland University Hospital, University of Bergen, Bergen, Norway"],"email":"laura.garrison@uib.no","is_corresponding":false,"name":"Laura Garrison"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1316","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"The Language of Infographics: Toward Understanding Conceptual Metaphor Use in Scientific Storytelling","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1318","abstract":"In this study, we address the growing issue of misleading charts, a prevalent problem that undermines the integrity of information dissemination. Misleading charts can distort the viewer's perception of data, leading to misinterpretations and decisions based on false information. The development of effective automatic detection methods for misleading charts is an urgent field of research. The advancement of multimodal Large Language Models (LLMs) has introduced a promising direction for addressing this challenge. We explored the capabilities of these models in analyzing complex charts and assessing the impact of different prompting strategies on the models' analyses. We utilized a dataset of misleading charts collected from the internet by prior research and crafted nine distinct prompts, ranging from simple to complex, to test the ability of four different multimodal LLMs in detecting over 21 different chart issues. Through three experiments, from initial exploration to detailed analysis, we progressively gained insights into how to effectively prompt LLMs to identify misleading charts and developed strategies to address the scalability challenges encountered as we expanded our detection range from the initial five issues to 21 issues in the final experiment. Our findings reveal that multimodal LLMs possess a strong capability for chart comprehension and critical thinking in data interpretation. 
There is significant potential in employing multimodal LLMs to counter misleading information by supporting critical thinking and enhancing visualization literacy. This study demonstrates their applicability in addressing the pressing concern of misleading charts.","authors":[{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"yhload@cse.ust.hk","is_corresponding":true,"name":"Leo Yu-Ho Lo"},{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"huamin@cse.ust.hk","is_corresponding":false,"name":"Huamin Qu"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1318","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"How Good (Or Bad) Are LLMs in Detecting Misleading Visualizations","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1325","abstract":"Dynamic data visualizations can convey large amounts of information over time, such as using motion to depict changes in data values for multiple entities. Such dynamic displays put a demand on our visual processing capacities, yet our perception of motion is limited. When tracking multiple objects across space and time, humans can typically track up to four objects, and the capacity is even lower if we also need to remember the history of the objects\u2019 features. Several techniques have been shown to improve the processing of dynamic displays. Staging the animation to sequentially show steps in a transition and tracing object movement by displaying trajectory histories can increase processing by reducing the cognitive load. In this paper, we examine the effectiveness of staging and tracing in dynamic displays. We showed participants animated line charts depicting the movements of lines and asked them to identify the line with the highest mean and variance. We manipulated the animation to display the lines with or without staging, tracing, and history, and compared the results to a static chart as a control. Results showed that tracing and staging are preferred by participants, and improve their performance in mean and variance tasks, respectively. The preferred display time was 3 times shorter when staging was used. Also, encoding animation speed with mean and variance in congruent tasks is associated with higher accuracy. 
These findings help inform real-world best practices for building dynamic displays that leverage the strength of humans' visual processing.","authors":[{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"shu343@gatech.edu","is_corresponding":true,"name":"Songwen Hu"},{"affiliations":["Northwestern University, Evanston, United States"],"email":"ouxunjiang@u.northwestern.edu","is_corresponding":false,"name":"Ouxun Jiang"},{"affiliations":["Dolby Laboratories Inc., San Francisco, United States"],"email":"jcr@dolby.com","is_corresponding":false,"name":"Jeffrey Riedmiller"},{"affiliations":["Georgia Tech, Atlanta, United States","University of Massachusetts Amherst, Amherst, United States"],"email":"cxiong@gatech.edu","is_corresponding":false,"name":"Cindy Xiong Bearfield"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1325","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Motion-Based Visual Encoding Can Improve Performance on Perceptual Tasks with Dynamic Time Series","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1326","abstract":"Evaluating the quality of text responses generated by large language models (LLMs) poses unique challenges compared to traditional machine learning. While automatic side-by-side evaluation has emerged as a promising approach, LLM developers face scalability and interpretability challenges in analyzing these evaluation results. In this paper, we present LLM Comparator, a novel visual analytics tool for interactively analyzing results from side-by-side evaluation of LLMs. The tool provides users with interactive workflows to understand when and why a model performs better or worse than a baseline model, and how the responses from two models differ qualitatively. We iteratively designed and developed the tool by closely working with researchers and engineers at a large technology company. Qualitative feedback from users highlights that the tool facilitates in-depth analysis of individual examples while enabling users to visually overview and flexibly slice data. 
This empowers users to identify undesirable patterns, formulate hypotheses about model behavior, and gain insights for model improvement.","authors":[{"affiliations":["Google, Atlanta, United States"],"email":"minsuk.kahng@gmail.com","is_corresponding":true,"name":"Minsuk Kahng"},{"affiliations":["Google Research, Seattle, United States"],"email":"iftenney@google.com","is_corresponding":false,"name":"Ian Tenney"},{"affiliations":["Google Research, Cambridge, United States"],"email":"mahimap@google.com","is_corresponding":false,"name":"Mahima Pushkarna"},{"affiliations":["Google Research, Pittsburgh, United States"],"email":"lxieyang.cmu@gmail.com","is_corresponding":false,"name":"Michael Xieyang Liu"},{"affiliations":["Google Research, Cambridge, United States"],"email":"jwexler@google.com","is_corresponding":false,"name":"James Wexler"},{"affiliations":["Google, Cambridge, United States"],"email":"ereif@google.com","is_corresponding":false,"name":"Emily Reif"},{"affiliations":["Google Research, Mountain View, United States"],"email":"kallarackal@google.com","is_corresponding":false,"name":"Krystal Kallarackal"},{"affiliations":["Google Research, Seattle, United States"],"email":"minsuk.cs@gmail.com","is_corresponding":false,"name":"Minsuk Chang"},{"affiliations":["Google, Cambridge, United States"],"email":"michaelterry@google.com","is_corresponding":false,"name":"Michael Terry"},{"affiliations":["Google, Paris, France"],"email":"ldixon@google.com","is_corresponding":false,"name":"Lucas Dixon"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1326","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"LLM Comparator: Interactive Analysis of Side-by-Side Evaluation of Large Language Models","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1329","abstract":"The integration of Large Language Models (LLMs), especially ChatGPT, into education is poised to revolutionize students' learning experiences by introducing innovative conversational learning methodologies. To empower students to fully leverage the capabilities of ChatGPT in educational scenarios, understanding students' interaction patterns with ChatGPT is crucial for instructors. However, this endeavor is challenging due to the absence of datasets focused on student-ChatGPT conversations and the complexities in identifying and analyzing the evolutional interaction patterns within conversations. To address these challenges, we collected conversational data from 48 students interacting with ChatGPT in a master's level data visualization course over one semester. We then developed a coding scheme, grounded in the literature on cognitive levels and thematic analysis, to categorize students' interaction patterns with ChatGPT. 
Furthermore, we present a visual analytics system, StuGPTViz, that tracks and compares temporal patterns in student prompts and the quality of ChatGPT's responses at multiple scales, revealing significant pedagogical insights for instructors. We validated the system's effectiveness through expert interviews with six data visualization instructors and three case studies. The results confirmed StuGPTViz's capacity to enhance educators' insights into the pedagogical value of ChatGPT. We also discussed the potential research opportunities of applying visual analytics in education and developing AI-driven personalized learning solutions.","authors":[{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"zchendf@connect.ust.hk","is_corresponding":true,"name":"Zixin Chen"},{"affiliations":["The Hong Kong University of Science and Technology, Sai Kung, China"],"email":"csejiachenw@ust.hk","is_corresponding":false,"name":"Jiachen Wang"},{"affiliations":["Texas A&M University, College Station, United States"],"email":"xiameng9355@gmail.com","is_corresponding":false,"name":"Meng Xia"},{"affiliations":["The Hong Kong University of Science and Technology, Kowloon, Hong Kong"],"email":"kshigyo@connect.ust.hk","is_corresponding":false,"name":"Kento Shigyo"},{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"dliuak@connect.ust.hk","is_corresponding":false,"name":"Dingdong Liu"},{"affiliations":["Hong Kong University of Science and Technology, Hong Kong, Hong Kong"],"email":"rzhangab@connect.ust.hk","is_corresponding":false,"name":"Rong Zhang"},{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"huamin@cse.ust.hk","is_corresponding":false,"name":"Huamin Qu"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1329","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"StuGPTViz: A Visual Analytics Approach to Understand Student-ChatGPT Interactions","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1332","abstract":"Translating natural language to visualization (NL2VIS) has shown great promise for visual data analysis, but it remains a challenging task that requires multiple low-level implementations, such as natural language processing and visualization design. Recent advancements in pre-trained large language models (LLMs) are opening new avenues for generating visualizations from natural language. However, the lack of a comprehensive and reliable benchmark hinders our understanding of LLMs\u2019 capabilities in visualization generation. In this paper, we address this gap by proposing a new NL2VIS benchmark called VisEval. Firstly, we introduce a high-quality and large-scale dataset. This dataset includes 2,524 representative queries covering 146 databases, paired with accurately labeled ground truths. 
Secondly, we advocate for a comprehensive automated evaluation methodology covering multiple dimensions, including validity, legality, and readability. By systematically scanning for potential issues with a number of heterogeneous checkers, VisEval provides reliable and trustworthy evaluation outcomes. We run VisEval on a series of state-of-the-art LLMs. Our evaluation reveals prevalent challenges and delivers essential insights for future advancements.","authors":[{"affiliations":["Microsoft Research, Shanghai, China"],"email":"christy05.chen@gmail.com","is_corresponding":true,"name":"Nan Chen"},{"affiliations":["Microsoft Research, Shanghai, China"],"email":"scottyugochang@gmail.com","is_corresponding":false,"name":"Yuge Zhang"},{"affiliations":["Microsoft Research, Shanghai, China"],"email":"jiahangxu@microsoft.com","is_corresponding":false,"name":"Jiahang Xu"},{"affiliations":["ShanghaiTech University, Shanghai, China"],"email":"rk.ren@outlook.com","is_corresponding":false,"name":"Kan Ren"},{"affiliations":["Microsoft Research, Shanghai, China"],"email":"yuqyang@microsoft.com","is_corresponding":false,"name":"Yuqing Yang"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1332","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"VisEval: A Benchmark for Data Visualization in the Era of Large Language Models","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1333","abstract":"Data videos are increasingly becoming a popular data storytelling form that integrates visuals and audio. In recent years, more and more researchers have explored many narrative structures for effective and attractive data storytelling. Meanwhile, the Hero's Journey provides a classic narrative framework specific to the Hero's story that has been adopted by various media. There are continuous discussions about applying the Hero's Journey to data stories. However, so far, there has been little systematic and practical guidance on how to create a data video for a specific story type like the Hero's Journey, or on how to manipulate its sound and visual designs simultaneously. To fill this gap, we first identified 48 data videos aligned with the Hero's Journey from 109 high-quality data videos. Then, we examined how existing practices apply the Hero's Journey to creating data videos. We coded the 48 data videos in terms of the narrative stages, sound design, and visual design according to the Hero's Journey structure. Based on our findings, we proposed a design space to provide practical guidance on customizing the narrative, visual, and sound design for the different narrative segments of the Hero's Journey (i.e., Departure, Initiation, Return) in data video creation. 
To validate our proposed design space, we conducted a user study in which 20 participants were invited to design data videos with and without our design space guidance; the resulting videos were evaluated by two experts. Results show that our design space provides useful and practical guidance that helps data storytellers effectively create data videos with the Hero's Journey.","authors":[{"affiliations":["The Hong Kong University of Science and Technology, Guangzhou, China"],"email":"zwei302@connect.hkust-gz.edu.cn","is_corresponding":true,"name":"Zheng Wei"},{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"huamin@cse.ust.hk","is_corresponding":false,"name":"Huamin Qu"},{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"xxubq@connect.ust.hk","is_corresponding":false,"name":"Xian Xu"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1333","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Telling Data Stories with the Hero\u2019s Journey: Design Guidance for Creating Data Videos","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1342","abstract":"Genomics experts rely on visualization to extract and share insights from complex and large-scale datasets. Beyond off-the-shelf tools for data exploration, there is an increasing need for platforms that aid experts in authoring customized visualizations for both exploration and communication of insights. A variety of interactive techniques have been proposed for authoring data visualizations, such as template editing, shelf configuration, natural language input, and code editors. However, it remains unclear how genomics experts create visualizations and which techniques best support their visualization tasks and needs. To address this gap, we conducted two user studies with genomics researchers: (1) semi-structured interviews (n=20) to identify the tasks, user contexts, and current visualization authoring techniques and (2) an exploratory study (n=13) using visual probes to elicit users\u2019 intents and desired techniques when creating visualizations. 
Our contributions include (1) a characterization of how visualization authoring is currently utilized in genomics visualization, identifying limitations and benefits in light of common criteria for authoring tools, and (2) generalizable and actionable design implications for genomics visualization authoring tools based on our findings on task- and user-specific usefulness of authoring techniques.","authors":[{"affiliations":["Eindhoven University of Technology, Eindhoven, Netherlands"],"email":"a.v.d.brandt@tue.nl","is_corresponding":true,"name":"Astrid van den Brandt"},{"affiliations":["Harvard Medical School, Boston, United States"],"email":"sehi_lyi@hms.harvard.edu","is_corresponding":false,"name":"Sehi L'Yi"},{"affiliations":["Harvard Medical School, Boston, United States"],"email":"huyen_nguyen@hms.harvard.edu","is_corresponding":false,"name":"Huyen N. Nguyen"},{"affiliations":["Eindhoven University of Technology, Eindhoven, Netherlands"],"email":"a.vilanova@tue.nl","is_corresponding":false,"name":"Anna Vilanova"},{"affiliations":["Harvard Medical School, Boston, United States"],"email":"nils@hms.harvard.edu","is_corresponding":false,"name":"Nils Gehlenborg"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1342","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Understanding Visualization Authoring Techniques for Genomics Data in the Context of Personas and Tasks","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1351","abstract":"As basketball\u2019s popularity surges, fans often find themselves confused and overwhelmed by the rapid game pace and complexity. Basketball tactics, involving a complex series of actions, require substantial knowledge to be fully understood. This complexity leads to a need for additional information and explanation, which can distract fans from the game. To tackle these challenges, we present Sportify, a Visual Question Answering system that integrates narratives and embedded visualization for demystifying basketball tactical questions, aiding fans in understanding various game aspects. We propose three novel action visualizations (i.e., Pass, Cut, and Screen) to demonstrate critical action sequences. To explain the reasoning and logic behind players\u2019 actions, we leverage a large-language model (LLM) to generate narratives. We adopt a storytelling approach for complex scenarios from both first- and third-person perspectives, integrating action visualizations. We evaluated Sportify with basketball fans to investigate its impact on the understanding of tactics, and how different personal perspectives of narratives impact the understanding of complex tactics with action visualizations. Our evaluation with basketball fans demonstrates Sportify\u2019s capability to deepen tactical insights and amplify the viewing experience. 
Furthermore, third-person narration assists people in getting in-depth game explanations, while first-person narration enhances fans\u2019 game engagement.","authors":[{"affiliations":["Harvard University, Allston, United States"],"email":"chungyi347@gmail.com","is_corresponding":true,"name":"Chunggi Lee"},{"affiliations":["Harvard University, Cambridge, United States"],"email":"mlin@g.harvard.edu","is_corresponding":false,"name":"Tica Lin"},{"affiliations":["University of Minnesota-Twin Cities, Minneapolis, United States"],"email":"ztchen@umn.edu","is_corresponding":false,"name":"Chen Zhu-Tian"},{"affiliations":["Harvard University, Cambridge, United States"],"email":"pfister@seas.harvard.edu","is_corresponding":false,"name":"Hanspeter Pfister"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1351","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Sportify: Question Answering with Embedded Visualizations and Personified Narratives for Sports Video","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1363","abstract":"Data visualization aids in making data analysis more intuitive and in-depth, with widespread applications in fields such as biology, finance, and medicine. Massive and continuously growing streaming time series data are typically visualized in the form of line charts, but the data transmission puts significant pressure on the network, leading to visualization lag or even complete rendering failure. This paper proposes a universal sampling algorithm, FPCS, which retains feature points from continuously received streaming time series data, compensates for frequently fluctuating feature points, and aims to achieve efficient visualization. This algorithm bridges the gap in sampling for streaming time series data. 
The algorithm has several advantages: (1) It optimizes the sampling results by compensating for fewer feature points, retaining the visualization features of the original data very well, ensuring high-quality sampled data; (2) Its execution time is the shortest among similar existing algorithms; (3) It has an almost negligible space overhead; (4) The data sampling process does not depend on the overall data; (5) This algorithm can be applied to infinite streaming data and finite static data.","authors":[{"affiliations":["China Nanhu Academy of Electronics and Information Technology(CNAEIT), JiaXing, China"],"email":"3271961659@qq.com","is_corresponding":true,"name":"Hongyan Li"},{"affiliations":["China Nanhu Academy of Electronics and Information Technology(CNAEIT), JiaXing, China"],"email":"ustcboy@outlook.com","is_corresponding":false,"name":"Bo Yang"},{"affiliations":["China Nanhu Academy of Electronics and Information Technology, Jiaxing, China"],"email":"caiyansong@cnaeit.com","is_corresponding":false,"name":"Yansong Chua"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1363","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"FPCS: Feature Preserving Compensated Sampling of Streaming Time Series Data","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1368","abstract":"Synthetic Lethal (SL) relationships, although rare among the vast array of gene combinations, hold substantial promise for targeted cancer therapy. Despite advancements in AI model accuracy, there remains a persistent need among domain experts for interpretive paths and mechanism explorations that better harmonize with domain-specific knowledge, particularly due to the significant costs involved in experimentation. To address this gap, we propose an iterative Human-AI collaborative framework comprising two key components: 1) Human-Engaged Knowledge Graph Refinement based on Metapath Strategies, which leverages insights from interpretive paths and domain expertise to refine the knowledge graph through metapath strategies with appropriate granularity. 2) Cross-Granularity SL Interpretation Enhancement and Mechanism Analysis, which aids domain experts in organizing and comparing prediction results and interpretive paths across different granularities, thereby uncovering new SL relationships, enhancing result interpretation, and elucidating potential mechanisms inferred by Graph Neural Network (GNN) models. These components cyclically optimize model predictions and mechanism explorations, thereby enhancing expert involvement and intervention to build trust. This framework, facilitated by SLInterpreter, ensures that newly generated interpretive paths increasingly align with domain knowledge and adhere more closely to real-world biological principles through iterative Human-AI collaboration. 
Subsequently, we evaluate the efficacy of the framework through a case study and expert interviews.","authors":[{"affiliations":["Shanghaitech University, Shanghai, China"],"email":"jianghr2023@shanghaitech.edu.cn","is_corresponding":true,"name":"Haoran Jiang"},{"affiliations":["ShanghaiTech University, Shanghai, China"],"email":"shishh2023@shanghaitech.edu.cn","is_corresponding":false,"name":"Shaohan Shi"},{"affiliations":["ShanghaiTech University, Shanghai, China"],"email":"zhangshh2@shanghaitech.edu.cn","is_corresponding":false,"name":"Shuhao Zhang"},{"affiliations":["ShanghaiTech University, Shanghai, China"],"email":"zhengjie@shanghaitech.edu.cn","is_corresponding":false,"name":"Jie Zheng"},{"affiliations":["ShanghaiTech University, Shanghai, China"],"email":"liquan@shanghaitech.edu.cn","is_corresponding":false,"name":"Quan Li"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1368","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"SLInterpreter: An Exploratory and Iterative Human-AI Collaborative System for GNN-based Synthetic Lethal Prediction","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1391","abstract":"In volume visualization, visualization synthesis has attracted much attention due to its ability to generate novel visualizations without following the conventional rendering pipeline. However, existing solutions based on generative adversarial networks often require many training images and take significant training time. Still, issues of low quality, poor consistency, and limited flexibility persist. This paper introduces StyleRF-VolVis, an innovative style transfer framework for expressive volume visualization (VolVis) via neural radiance field (NeRF). The expressiveness of StyleRF-VolVis is upheld by its ability to accurately separate the underlying scene geometry (i.e., content) and color appearance (i.e., style), conveniently modify color, opacity, and lighting of the original rendering while maintaining visual content consistency across the views, and effectively transfer arbitrary styles from reference images to the reconstructed 3D scene. To achieve these, we design a base NeRF model for scene geometry extraction, a palette color network to classify regions of the radiance field for photorealistic editing, and an unrestricted color network to lift the color palette constraint via knowledge distillation for non-photorealistic editing. 
We demonstrate the superior quality, consistency, and flexibility of StyleRF-VolVis by experimenting with various volume rendering scenes and reference images and comparing StyleRF-VolVis against other image-based (AdaIN), video-based (ReReVST), and NeRF-based (ARF and SNeRF) style rendering solutions.","authors":[{"affiliations":["University of Notre Dame, Notre Dame, United States"],"email":"ktang2@nd.edu","is_corresponding":true,"name":"Kaiyuan Tang"},{"affiliations":["University of Notre Dame, Notre Dame, United States"],"email":"chaoli.wang@nd.edu","is_corresponding":false,"name":"Chaoli Wang"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1391","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"StyleRF-VolVis: Style Transfer of Neural Radiance Fields for Expressive Volume Visualization","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1393","abstract":"This paper discusses challenges and design strategies in responsive design for thematic maps in information visualization. Thematic maps pose a number of unique challenges for responsiveness, such as inflexible aspect ratios that do not easily adapt to varying screen dimensions, or densely clustered visual elements in urban areas becoming illegible at smaller scales. However, design guidance on how to best address these issues is currently lacking. We conducted design sessions with eight professional designers and developers of web-based thematic maps for information visualization. Participants were asked to redesign a given map for various screen sizes and aspect ratios and to describe their reasoning for when and how they adapted the design. We report general observations of practitioners\u2019 motivations, decision-making processes, and personal design frameworks. We then derive seven challenges commonly encountered in responsive map design, and 17 strategies to address them, such as repositioning elements, segmenting the map, or using alternative visualizations. We compile these challenges and strategies into an illustrated cheat sheet targeted at anyone designing or learning to design responsive maps. 
The cheat sheet is available online: https://responsive-vis.github.io/map-cheat-sheet.","authors":[{"affiliations":["University of Edinburgh, Edinburgh, United Kingdom"],"email":"sarah.schoettler@ed.ac.uk","is_corresponding":true,"name":"Sarah Sch\u00f6ttler"},{"affiliations":["University of Edinburgh, Edinburgh, United Kingdom"],"email":"uhinrich@ed.ac.uk","is_corresponding":false,"name":"Uta Hinrichs"},{"affiliations":["Inria, Bordeaux, France","University of Edinburgh, Edinburgh, United Kingdom"],"email":"bbach@inf.ed.ac.uk","is_corresponding":false,"name":"Benjamin Bach"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1393","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Practices and Strategies in Responsive Thematic Map Design: A Report from Design Workshops with Experts","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1394","abstract":"This paper presents discursive patinas, a technique to visualize discussions onto data visualizations, inspired by how people leave traces in the physical world. While data visualizations are widely discussed in online communities and social media, comments tend to be displayed separately from the visualization. We lack ways to relate these discussions to the content of the visualization, e.g., to situate comments, explain visual patterns, or question assumptions. In our visualization annotation interface, users can designate areas within the visualization to, e.g., highlight specific visual marks (anchors), attach textual comments, and add category labels, likes, and replies. By coloring and styling these designated areas, a meta visualization emerges, showing what and where people comment and annotate. These patinas show regions of heavy discussions, recent commenting activity, and the distribution of questions, suggestions, or personal stories. To study how people use anchors to discuss visualizations and to understand if and how information in patinas influences people's understanding of the discussion, we ran workshops with 90 participants including students, domain experts, and visualization researchers. Our results show that discursive patinas improve the ability to navigate discussions and guide people to comments that help them understand, contextualize, or scrutinize the visualization. 
We discuss the potential of the technique to support discursive engagements, including critical readings of visualizations, design feedback, and feminist approaches to data visualization.","authors":[{"affiliations":["University of Edinburgh, Edinburgh, United Kingdom","Potsdam University of Applied Sciences, Potsdam, Germany"],"email":"tobias.kauer@fh-potsdam.de","is_corresponding":true,"name":"Tobias Kauer"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"derya.akbaba@liu.se","is_corresponding":false,"name":"Derya Akbaba"},{"affiliations":["University of Applied Sciences Potsdam, Potsdam, Germany"],"email":"doerk@fh-potsdam.de","is_corresponding":false,"name":"Marian D\u00f6rk"},{"affiliations":["Inria, Bordeaux, France","University of Edinburgh, Edinburgh, United Kingdom"],"email":"bbach@inf.ed.ac.uk","is_corresponding":false,"name":"Benjamin Bach"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1394","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Discursive Patinas: Anchoring Discussions in Data Visualizations","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1395","abstract":"Onboarding a user to a visualization dashboard entails explaining its various components, including the chart types used, the data loaded, and the interactions provided. Authoring such an onboarding experience is time-consuming and requires significant knowledge, and little guidance exists on how best to do this. End-users being onboarded to a new dashboard can be either confused and overwhelmed, or disinterested and disengaged, depending on the user\u2019s expertise. We propose interactive dashboard tours (d-tours) as semi-automated onboarding experiences for variable user expertise that preserve the user\u2019s agency, interest, and engagement. Our interactive tours concept draws from open-world game design to give the user freedom in choosing their path in the onboarding. We have implemented the concept in a tool called D-TOUR PROTOTYPE that allows authors to craft custom and interactive dashboard tours from scratch or using automatic templates. Automatically generated tours can still be customized to use different media (such as video, audio, or highlighting) or new narratives to produce a tailored onboarding experience for individual users or groups. We demonstrate the usefulness of interactive dashboard tours through use cases and expert interviews. The evaluation shows that the authors find the automation in the DTour prototype helpful and time-saving and the users find it engaging and intuitive. 
This paper and all supplemental materials are available at https://osf.io/6fbjp/.","authors":[{"affiliations":["Pro2Future GmbH, Linz, Austria","Johannes Kepler University, Linz, Austria"],"email":"vaishali.dhanoa@pro2future.at","is_corresponding":true,"name":"Vaishali Dhanoa"},{"affiliations":["Johannes Kepler University, Linz, Austria"],"email":"andreas.hinterreiter@jku.at","is_corresponding":false,"name":"Andreas Hinterreiter"},{"affiliations":["Johannes Kepler University, Linz, Austria"],"email":"vanessa.fediuk@jku.at","is_corresponding":false,"name":"Vanessa Fediuk"},{"affiliations":["Aarhus University, Aarhus, Denmark"],"email":"elm@cs.au.dk","is_corresponding":false,"name":"Niklas Elmqvist"},{"affiliations":["Institute of Visual Computing & Human-Centered Technology, Vienna, Austria"],"email":"groeller@cg.tuwien.ac.at","is_corresponding":false,"name":"Eduard Gr\u00f6ller"},{"affiliations":["Johannes Kepler University Linz, Linz, Austria"],"email":"marc.streit@jku.at","is_corresponding":false,"name":"Marc Streit"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1395","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"D-Tour: Semi-Automatic Generation of Interactive Guided Tours for Visualization Dashboard Onboarding","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1414","abstract":"Visualization designers often rely on examples to explore the space of possible designs, yet we have little insight into how examples shape data visualization design outcomes. While the effects of examples have been studied in other disciplines, such as web design or engineering, the results are not readily applicable to visualization design due to inconsistencies in findings and challenges unique to visualization design. Towards bridging this gap, we conduct an exploratory experiment involving 32 data visualization designers focusing on the influence of five factors (timing, quantity, diversity, data topic similarity, and data schema similarity) on objectively measurable design outcomes (e.g., numbers of designs and idea transfers). Our quantitative analysis shows that when examples are introduced after initial brainstorming, designers curate examples with topics less similar to the dataset they are working on and produce more designs with a high variation in visualization components. Also, designers copy more ideas from examples with higher data schema similarities. Our qualitative analysis of participants\u2019 thought processes provides insights into why designers incorporate examples into their designs, revealing potential factors that have not been previously investigated. Finally, we discuss how our results inform future work on quantifying designs, improving measures of effectiveness, and supporting example-based visualization design. 
All supplementary materials are available at https://osf.io/sbp2k/?view_only=ca14af497f5845a0b1b2c616699fefc5","authors":[{"affiliations":["University of Maryland, College Park, United States"],"email":"hbako@umd.edu","is_corresponding":true,"name":"Hannah K. Bako"},{"affiliations":["The University of Texas at Austin, Austin, United States"],"email":"xinyi.liu@utexas.edu","is_corresponding":false,"name":"Xinyi Liu"},{"affiliations":["University of Maryland, College Park, United States"],"email":"gko1@terpmail.umd.edu","is_corresponding":false,"name":"Grace Ko"},{"affiliations":["Human Data Interaction Lab, College Park, United States"],"email":"hsong02@cs.umd.edu","is_corresponding":false,"name":"Hyemi Song"},{"affiliations":["University of Washington, Seattle, United States"],"email":"leibatt@cs.washington.edu","is_corresponding":false,"name":"Leilani Battle"},{"affiliations":["University of Maryland, College Park, United States"],"email":"leozcliu@umd.edu","is_corresponding":false,"name":"Zhicheng Liu"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1414","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Unveiling How Examples Shape Data Visualization Design Outcomes","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1416","abstract":"Various data visualization downstream applications such as reverse engineering and interactive authoring require a vocabulary that describes the structure of visualization scenes and the procedure to manipulate them. A few scene abstractions have been proposed, but they are restricted to specific applications for a limited set of visualization types. A unified and expressive model of data visualization scenes for different downstream applications has been missing. To fill this gap, we present Manipulable Semantic Components (MSC), a computational representation of data visualization scenes, to support applications in scene understanding and augmentation. MSC consists of two parts: a unified object model describing the structure of a visualization scene in terms of semantic components, and a set of operations to generate and modify the scene components. 
We demonstrate the benefits of MSC in three applications: visualization authoring, visualization deconstruction and reuse, and animation specification.","authors":[{"affiliations":["University of Maryland, College Park, United States"],"email":"leozcliu@umd.edu","is_corresponding":true,"name":"Zhicheng Liu"},{"affiliations":["University of Maryland, College Park, United States"],"email":"cchen24@umd.edu","is_corresponding":false,"name":"Chen Chen"},{"affiliations":["University of Maryland, College Park, United States"],"email":"hookerj100@gmail.com","is_corresponding":false,"name":"John Hooker"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1416","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Manipulable Semantic Components: a Computational Representation of Data Visualization Scenes","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1422","abstract":"Visualization items\u2014factual questions about visualizations that ask viewers to accomplish visualization tasks\u2014are regularly used in the field of information visualization as educational and evaluative materials. For example, researchers of visualization literacy require large, diverse banks of items to conduct studies where the same skill is measured repeatedly on the same participants. Yet, generating a large number of high-quality, diverse items requires significant time and expertise. To address the critical need for a large number of diverse visualization items in education and research, this paper investigates the potential for large language models (LLMs) to automate the generation of multiple-choice visualization items. Through an iterative design process, we develop an LLM-based pipeline, the VILA (Visualization Items Generated by Large LAnguage Models) pipeline, for efficiently generating visualization items that measure people\u2019s ability to accomplish visualization tasks. We use the VILA pipeline to generate 1,404 candidate items across 12 chart types and 13 visualization tasks. In collaboration with 11 visualization experts, we develop an evaluation rulebook which we then use to rate the quality of all candidate items. The result is a final bank, the VILA bank, of \u223c1,100 items. From this evaluation, we also identify and classify current limitations of LLMs in generating visualization items, and discuss the role of human oversight in ensuring quality. In addition, we demonstrate an application of our work by creating a visualization literacy test, VILA-VLAT, which measures people\u2019s ability to complete a diverse set of tasks on various types of visualizations; to show the potential of this application, we assess the convergent validity of VILA-VLAT by comparing it to the existing test VLAT via an online study (R = 0.70). 
Lastly, we discuss the application areas of the VILA pipeline and the VILA bank and provide practical recommendations for their use. All supplemental materials are available at https://osf.io/ysrhq/?view_only=e31b3ddf216e4351bb37bcedf744e9d6.","authors":[{"affiliations":["Northwestern University, Evanston, United States"],"email":"yuancui2025@u.northwestern.edu","is_corresponding":true,"name":"Yuan Cui"},{"affiliations":["Northwestern University, Evanston, United States"],"email":"wanqian.ge@northwestern.edu","is_corresponding":false,"name":"Lily W. Ge"},{"affiliations":["Worcester Polytechnic Institute, Worcester, United States"],"email":"yding5@wpi.edu","is_corresponding":false,"name":"Yiren Ding"},{"affiliations":["Worcester Polytechnic Institute, Worcester, United States"],"email":"ltharrison@wpi.edu","is_corresponding":false,"name":"Lane Harrison"},{"affiliations":["Northwestern University, Evanston, United States"],"email":"fumeng.p.yang@gmail.com","is_corresponding":false,"name":"Fumeng Yang"},{"affiliations":["Northwestern University, Chicago, United States"],"email":"matthew.kay@gmail.com","is_corresponding":false,"name":"Matthew Kay"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1422","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Promises and Pitfalls: Using Large Language Models to Generate Visualization Items","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1425","abstract":"Comics have been shown to be an effective method for sequential data-driven storytelling, especially for dynamic graphs that change over time. However, manually creating a data-driven comic for a dynamic graph is currently time-consuming, complex, and error-prone. In this paper, we propose DG Comics, a novel comic authoring tool for dynamic graphs that allows users to semi-automatically build the comic and annotate it. The tool uses a hierarchical clustering algorithm that we newly developed for segmenting consecutive snapshots of the dynamic graph while preserving their chronological order. It also provides rich information on both individuals and communities extracted from dynamic graphs in multiple views, where users can explore dynamic graphs and choose what to tell in comics. For evaluation, we provide an example and report results from a user study and expert review.","authors":[{"affiliations":["Ulsan National Institute of Science and Technology, Ulsan, Korea, Republic of"],"email":"joohee@unist.ac.kr","is_corresponding":true,"name":"Joohee Kim"},{"affiliations":["Ulsan National Institute of Science and Technology, Ulsan, Korea, Republic of"],"email":"gusdnr0916@unist.ac.kr","is_corresponding":false,"name":"Hyunwook Lee"},{"affiliations":["Ulsan National Institute of Science and Technology, Ulsan, Korea, Republic of"],"email":"ducnm@unist.ac.kr","is_corresponding":false,"name":"Duc M. 
Nguyen"},{"affiliations":["Australian National University, Canberra, Australia"],"email":"minjeong.shin@anu.edu.au","is_corresponding":false,"name":"Minjeong Shin"},{"affiliations":["IBM Research, Cambridge, United States"],"email":"bumchul.kwon@us.ibm.com","is_corresponding":false,"name":"Bum Chul Kwon"},{"affiliations":["UNIST, Ulsan, Korea, Republic of"],"email":"sako@unist.ac.kr","is_corresponding":false,"name":"Sungahn Ko"},{"affiliations":["Aarhus University, Aarhus, Denmark"],"email":"elm@cs.au.dk","is_corresponding":false,"name":"Niklas Elmqvist"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1425","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"DG Comics: Semi-Automatically Authoring Graph Comics for Dynamic Graphs","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1427","abstract":"Numerical simulation serves as a cornerstone in scientific modeling, yet the process of fine-tuning simulation parameters poses significant challenges. Conventionally, parameter adjustment relies on extensive numerical simulations, data analysis, and expert insights, resulting in substantial computational costs and low efficiency. The emergence of deep learning in recent years has provided promising avenues for more efficient exploration of parameter spaces. However, existing approaches often lack intuitive methods for precise parameter adjustment and optimization. To tackle these challenges, we introduce ParamsDrag, a model that facilitates parameter space exploration through direct interaction with visualizations. Inspired by DragGAN, our ParamsDrag model operates in three steps. First, the generative component of ParamsDrag generates visualizations based on the input simulation parameters. Second, by directly dragging structure-related features in the visualizations, users can intuitively understand the controlling effect of different parameters. Third, with the understanding from the earlier step, users can steer ParamsDrag to produce dynamic visual outcomes. 
Through experiments conducted on real-world simulations and comparisons with state-of-the-art deep-learning-based approaches, we demonstrate the efficacy of our solution.","authors":[{"affiliations":["Computer Network Information Center, Chinese Academy of Sciences, Beijing, China","University of Chinese Academy of Sciences, Beijing, China"],"email":"liguan@sccas.cn","is_corresponding":true,"name":"Guan Li"},{"affiliations":["Beijing Forestry University, Beijing, China"],"email":"leo_edumail@163.com","is_corresponding":false,"name":"Yang Liu"},{"affiliations":["Computer Network Information Center, Chinese Academy of Sciences, Beijing, China"],"email":"sgh@sccas.cn","is_corresponding":false,"name":"Guihua Shan"},{"affiliations":["Chinese Academy of Sciences, Beijing, China"],"email":"chengshiyu@cnic.cn","is_corresponding":false,"name":"Shiyu Cheng"},{"affiliations":["Beijing Forestry University, Beijing, China"],"email":"weiqun.cao@126.com","is_corresponding":false,"name":"Weiqun Cao"},{"affiliations":["Visa Research, Palo Alto, United States"],"email":"junpeng.wang.nk@gmail.com","is_corresponding":false,"name":"Junpeng Wang"},{"affiliations":["National Taiwan Normal University, Taipei City, Taiwan"],"email":"caseywang777@gmail.com","is_corresponding":false,"name":"Ko-Chih Wang"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1427","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"ParamsDrag: Interactive Parameter Space Exploration via Image-Space Dragging","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1438","abstract":"Differential privacy protects individual privacy but poses challenges to data exploration processes because the limited privacy budget restricts the flexibility of exploration and the noisy feedback of data requests leads to confusing uncertainty. In this study, we take the lead in describing corresponding exploration scenarios, including underlying requirements and available exploration strategies. To facilitate practical applications, we propose a visual analysis approach to the formulation of exploration strategies. Our approach applies a reinforcement learning model to provide diverse suggestions for exploration strategies according to the exploration intent of users. A novel visual design for representing uncertainty in correlation patterns is integrated into our prototype system to support the proposed approach. Finally, we conducted a user study and two case studies. 
The results of these studies verified that our approach can help develop strategies that satisfy the exploration intent of users.","authors":[{"affiliations":["Nankai University, Tianjin, China"],"email":"wangxumeng@nankai.edu.cn","is_corresponding":true,"name":"Xumeng Wang"},{"affiliations":["Nankai University, Tianjin, China"],"email":"jiaoshuangcheng@mail.nankai.edu.cn","is_corresponding":false,"name":"Shuangcheng Jiao"},{"affiliations":["Arizona State University, Tempe, United States"],"email":"cbryan16@asu.edu","is_corresponding":false,"name":"Chris Bryan"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1438","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Defogger: A Visual Analysis Approach for Data Exploration of Sensitive Data Protected by Differential Privacy","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1446","abstract":"We are currently witnessing an increase in web-based, data-driven initiatives that explain complex, contemporary issues through data and visualizations: climate change, sustainability, AI, or cultural discoveries. Many of these projects call themselves \"atlases\", a term that historically referred to collections of maps or scientific illustrations. To answer the question of what makes a \"visualization atlas\", we conducted a systematic analysis of 33 visualization atlases and semi-structured interviews with eight visualization atlas creators. Based on our results, we contribute (1) a definition of visualization atlases as an emerging format to present complex topics in a holistic, data-driven, and curated way through visualization, (2) a set of design patterns and design dimensions that led to (3) defining 5 visualization atlas genres, and (4) insights into the atlas creation from interviews. We found that visualization atlases are unique in that they combine exploratory visualization with narrative elements from data-driven storytelling and structured navigation mechanisms. They can act as reference, communication, or discovery tools targeting a wide range of audiences with different levels of domain knowledge. 
We conclude with a discussion of current design practices and emerging questions around the ethics and potential real-world impact of visualization atlases, aimed to inform the design and study of visualization atlases.","authors":[{"affiliations":["The University of Edinburgh, Edinburgh, United Kingdom"],"email":"jinrui.w@outlook.com","is_corresponding":true,"name":"Jinrui Wang"},{"affiliations":["Newcastle University, Newcastle Upon Tyne, United Kingdom"],"email":"xinhuan.shu@gmail.com","is_corresponding":false,"name":"Xinhuan Shu"},{"affiliations":["Inria, Bordeaux, France","University of Edinburgh, Edinburgh, United Kingdom"],"email":"bbach@inf.ed.ac.uk","is_corresponding":false,"name":"Benjamin Bach"},{"affiliations":["University of Edinburgh, Edinburgh, United Kingdom"],"email":"uhinrich@ed.ac.uk","is_corresponding":false,"name":"Uta Hinrichs"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1446","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Visualization Atlases: Explaining and Exploring Complex Topics through Data, Visualization, and Narration","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1451","abstract":"We present a systematic review, an empirical study, and a first set of considerations for designing visualizations in motion, derived from a concrete scenario in which these visualizations were used to support a primary task. In practice, when viewers are confronted with embedded visualizations, they often have to focus on a primary task and can only quickly glance at a visualization showing rich, often dynamically updated, information. As such, the visualizations must be designed so as not to distract from the primary task, while at the same time being readable and useful for aiding the primary task. For example, in games, players who are engaged in a battle have to look at their enemies but also read the remaining health of their own game character from the health bar over their character's head. Many trade-offs are possible in the design of embedded visualizations in such dynamic scenarios, which we explore in-depth in this paper with a focus on user experience. We use video games as an example of an application context with a rich existing set of visualizations in motion. We begin our work with a systematic review of in-game visualizations in motion. Next, we conduct an empirical user study to investigate how different embedded visualizations in motion designs impact user experience. We conclude with a set of considerations and trade-offs for designing visualizations in motion more broadly as derived from what we learned about video games. 
All supplemental materials of this paper are available at osf.io/3v8wm/.","authors":[{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Orsay, France","Inria, Saclay, France"],"email":"yaolijie0219@gmail.com","is_corresponding":true,"name":"Lijie Yao"},{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Orsay, France","Inria, Saclay, France"],"email":"federicabucchieri@gmail.com","is_corresponding":false,"name":"Federica Bucchieri"},{"affiliations":["Carleton University, Ottawa, Canada"],"email":"dieselfish@gmail.com","is_corresponding":false,"name":"Victoria McArthur"},{"affiliations":["LISN, Universit\u00e9 Paris-Saclay, CNRS, INRIA, Orsay, France"],"email":"anastasia.bezerianos@universite-paris-saclay.fr","is_corresponding":false,"name":"Anastasia Bezerianos"},{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Orsay, France","Inria, Saclay, France"],"email":"petra.isenberg@inria.fr","is_corresponding":false,"name":"Petra Isenberg"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1451","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"User Experience of Visualizations in Motion: A Case Study and Design Considerations","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1461","abstract":"This paper presents a practical approach for the optimization of topological simplification, a central pre-processing step for the analysis and visualization of scalar data. Given an input scalar field f and a set of \u201csignal\u201d persistence pairs to maintain, our approach produces an output field g that is close to f and which optimizes (i) the cancellation of \u201cnon-signal\u201d pairs, while (ii) preserving the \u201csignal\u201d pairs. In contrast to pre-existing simplification approaches, our method is not restricted to persistence pairs involving extrema and can thus address a larger class of topological features, in particular saddle pairs in three-dimensional scalar data. Our approach leverages recent generic persistence optimization frameworks and extends them with tailored accelerations specific to the problem of topological simplification. Extensive experiments report substantial accelerations over these frameworks, thereby making topological simplification optimization practical for real-life datasets. Our work enables a direct visualization and analysis of the topologically simplified data, e.g., via isosurfaces of simplified topology (fewer components and handles). We apply our approach to the extraction of prominent filament structures in three-dimensional data. Specifically, we show that our pre-simplification of the data leads to practical improvements over standard topological techniques for removing filament loops. We also show how our framework can be used to repair genus defects in surface processing. 
Finally, we provide a C++ implementation for reproducibility purposes.","authors":[{"affiliations":["CNRS, Paris, France","SORBONNE UNIVERSITE, Paris, France"],"email":"mohamed.kissi@lip6.fr","is_corresponding":true,"name":"Mohamed KISSI"},{"affiliations":["CNRS, Paris, France","Sorbonne Universit\u00e9, Paris, France"],"email":"mathieu.pont@lip6.fr","is_corresponding":false,"name":"Mathieu Pont"},{"affiliations":["University of Arizona, Tucson, United States"],"email":"josh@cs.arizona.edu","is_corresponding":false,"name":"Joshua A Levine"},{"affiliations":["CNRS, Paris, France","Sorbonne Universit\u00e9, Paris, France"],"email":"julien.tierny@sorbonne-universite.fr","is_corresponding":false,"name":"Julien Tierny"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1461","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"A Practical Solver for Scalar Data Topological Simplification","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1472","abstract":"Trained on vast corpora, Large Language Models (LLMs) have the potential to encode visualization design knowledge and best practices. However, if they fail to do so, they might provide unreliable visualization recommendations. What visualization design preferences, then, have LLMs learned? We contribute DracoGPT, an approach for extracting and modeling visualization design preferences from LLMs. To assess varied tasks, we develop two pipelines---DracoGPT-Rank and DracoGPT-Recommend---to model LLMs prompted to either rank or recommend visual encoding specifications. We use Draco as a shared knowledge base in which to represent LLM design preferences and compare them to best practices from empirical research. We demonstrate that DracoGPT models the preferences expressed by LLMs well, enabling analysis in terms of Draco design constraints. Across a suite of backing LLMs, we find that DracoGPT-Rank and DracoGPT-Recommend moderately agree with each other, but both substantively diverge from guidelines drawn from human subjects experiments. Future work can build on our approach to expand Draco's knowledge base to model a richer set of preferences and serve as a reliable and cost-effective stand-in for LLMs.","authors":[{"affiliations":["University of Washington, Seattle, United States"],"email":"wwill@cs.washington.edu","is_corresponding":true,"name":"Huichen Will Wang"},{"affiliations":["University of Washington, Seattle, United States"],"email":"mgord@cs.stanford.edu","is_corresponding":false,"name":"Mitchell L. 
Gordon"},{"affiliations":["University of Washington, Seattle, United States"],"email":"leibatt@cs.washington.edu","is_corresponding":false,"name":"Leilani Battle"},{"affiliations":["University of Washington, Seattle, United States"],"email":"jheer@uw.edu","is_corresponding":false,"name":"Jeffrey Heer"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1472","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"DracoGPT: Extracting Visualization Design Preferences from Large Language Models","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1474","abstract":"Recent advancements in Large Language Models (LLMs) and Prompt Engineering have made chatbot customization more accessible, significantly reducing barriers to tasks that previously required programming skills. However, prompt evaluation, especially at the dataset scale, remains complex due to the need to assess prompts across thousands of test instances within a dataset. Our study, based on a comprehensive literature review and pilot study, summarized five critical challenges in prompt evaluation. In response, we introduce a feature-oriented workflow for systematic prompt evaluation, focusing on text summarization. Our workflow advocates feature metrics such as complexity, formality, or naturalness, instead of using traditional quality metrics like ROUGE. This design choice enables a more user-friendly evaluation of prompts, as it guides users in sorting through the ambiguity inherent in natural language. To support this workflow, we introduce Awesum, a visual analytics system that facilitates identifying optimal prompt refinements through interactive visualizations, featuring a novel Prompt Comparator design that employs a BubbleSet-inspired design enhanced by dimensionality reduction techniques. We evaluate the effectiveness and general applicability of the system with practitioners from various domains and found that (1) our design helps overcome the learning curve for non-technical people to conduct a systematic evaluation, and (2) our feature-oriented workflow has the potential to generalize to other NLG and image-generation tasks. 
For future work, we advocate moving towards feature-oriented evaluation of LLM prompts and discuss unsolved challenges in terms of human-agent interaction.","authors":[{"affiliations":["University of California Davis, Davis, United States"],"email":"ytlee@ucdavis.edu","is_corresponding":true,"name":"Sam Yu-Te Lee"},{"affiliations":["University of California, Davis, Davis, United States"],"email":"abahukhandi@ucdavis.edu","is_corresponding":false,"name":"Aryaman Bahukhandi"},{"affiliations":["University of California at Davis, Davis, United States"],"email":"dyuliu@ucdavis.edu","is_corresponding":false,"name":"Dongyu Liu"},{"affiliations":["University of California at Davis, Davis, United States"],"email":"ma@cs.ucdavis.edu","is_corresponding":false,"name":"Kwan-Liu Ma"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1474","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Towards Dataset-scale and Feature-oriented Evaluation of Text Summarization in Large Language Model Prompts","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1480","abstract":"We propose the notion of Attention-aware Visualizations (AAVs) that track the user's perception of a visual representation over time and feed this information back to the visualization. This idea is particularly useful for ubiquitous and immersive analytics where knowing which embedded visualizations the user is looking at can be used to make visualizations react appropriately to the user's attention: for example, by highlighting data the user has not yet seen. We can separate the approach into three components: (1) measuring the user's gaze on a visualization and its parts; (2) tracking the user's attention over time; and (3) reactively modifying the visual representation based on the current attention metric. In this paper, we present two separate implementations of AAV: a 2D numeric integration of attention for web-based visualizations that can use an embodied eye-tracker to capture the user's gaze, and a 3D implementation that uses the stencil buffer to track the visibility of each individual mark in a visualization. Both methods provide similar mechanisms for accumulating attention over time and changing the appearance of marks in response. We also present results from a controlled laboratory experiment studying different visual feedback mechanisms for attention.","authors":[{"affiliations":["Aarhus University, Aarhus, Denmark"],"email":"arvind@cs.au.dk","is_corresponding":true,"name":"Arvind Srinivasan"},{"affiliations":["Aarhus University, Aarhus N, Denmark"],"email":"johannes@ellemose.eu","is_corresponding":false,"name":"Johannes Ellemose"},{"affiliations":["Bangor University, Bangor, United Kingdom"],"email":"p.butcher@bangor.ac.uk","is_corresponding":false,"name":"Peter W. S. 
Butcher"},{"affiliations":["Bangor University, Bangor, United Kingdom"],"email":"p.ritsos@bangor.ac.uk","is_corresponding":false,"name":"Panagiotis D. Ritsos"},{"affiliations":["Aarhus University, Aarhus, Denmark"],"email":"elm@cs.au.dk","is_corresponding":false,"name":"Niklas Elmqvist"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1480","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Attention-Aware Visualization: Tracking and Responding to User Perception Over Time","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1483","abstract":"Egocentric networks, often visualized as node-link diagrams, portray the complex relationship (link) dynamics between an entity (node) and others. However, common analytics tasks are multifaceted, encompassing interactions among four key aspects: strength, function, structure, and content. Current node-link visualization designs may fall short, focusing narrowly on certain aspects and neglecting the holistic, dynamic nature of egocentric networks. To bridge this gap, we introduce SpreadLine, a novel visualization framework designed to enable the visual exploration of egocentric networks from these four aspects at the microscopic level. Leveraging the intuitive appeal of storyline visualizations, SpreadLine adopts a storyline-based design to represent entities and their evolving relationships. We further encode essential topological information in the layout and condense the contextual information in a metro map metaphor, allowing for a more engaging and effective way to explore temporal and attribute-based information. To guide our work, with a thorough review of pertinent literature, we have distilled a task taxonomy that addresses the analytical needs specific to egocentric network exploration. Acknowledging the diverse analytical requirements of users, SpreadLine offers customizable encodings to enable users to tailor the framework for their tasks. 
We demonstrate the efficacy and general applicability of SpreadLine through three diverse real-world case studies and a usability study.","authors":[{"affiliations":["University of California, Davis, Davis, United States"],"email":"yskuo@ucdavis.edu","is_corresponding":true,"name":"Yun-Hsin Kuo"},{"affiliations":["University of California at Davis, Davis, United States"],"email":"dyuliu@ucdavis.edu","is_corresponding":false,"name":"Dongyu Liu"},{"affiliations":["University of California at Davis, Davis, United States"],"email":"ma@cs.ucdavis.edu","is_corresponding":false,"name":"Kwan-Liu Ma"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1483","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"SpreadLine: Visualizing Egocentric Dynamic Influence","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1487","abstract":"Referential gestures, or as termed in linguistics, deixis, are an essential part of communication around data visualizations. Despite their importance, such gestures are often overlooked when documenting data analysis meetings. Transcripts, for instance, fail to capture gestures, and video recordings may not adequately capture or emphasize them. We introduce a novel method for documenting collaborative data meetings that treats deixis as a first-class citizen. Our proposed framework captures cursor-based gestural data along with audio and converts them into interactive documents. The framework leverages a large language model to identify word correspondences with gestures. These identified references are used to create context-based annotations in the resulting interactive document. We assess the effectiveness of our proposed method through a user study, finding that participants preferred our automated interactive documentation over recordings, transcripts, and manual note-taking. Furthermore, we derive a preliminary taxonomy of cursor-based deictic gestures from participant actions during the study. This taxonomy offers further opportunities for better utilizing cursor-based deixis in collaborative data analysis scenarios.","authors":[{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"hatch.on27@gmail.com","is_corresponding":true,"name":"Chang Han"},{"affiliations":["The University of Utah, Salt Lake City, United States"],"email":"kisaacs@sci.utah.edu","is_corresponding":false,"name":"Katherine E. 
Isaacs"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1487","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"A Deixis-Centered Approach for Documenting Remote Synchronous Communication around Data Visualizations","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1488","abstract":"A year ago, we submitted an IEEE VIS paper entitled \u201cSwaying the Public? Impacts of Election Forecast Visualizations on Emotion, Trust, and Intention in the 2022 U.S. Midterms\u201d [68], which was later bestowed with the honor of a best paper award. Yet, studying such a complex phenomenon required us to explore many more design paths than we could count, and certainly more than we could document in a single paper. This paper, then, is the unwritten prequel\u2014the backstory. It chronicles our journey from a simple idea\u2014to study visualizations for election forecasts\u2014through obstacles such as developing meaningfully different, easy-to-understand forecast visualizations, crafting professional-looking forecasts, and grappling with how to study perceptions of the forecasts before, during, and after the 2022 U.S. midterm elections. Our backstory began with developing a design space for two-party election forecasts, de\ufb01ning dimensions such as data transformations, visual channels, layouts, and types of animated narratives. We then qualitatively evaluated ten representative prototypes in this design space through interviews with 13 participants. The interviews yielded invaluable insights into how people interpret uncertainty visualizations and reason about probability in a U.S. election context, such as confounding win probability with vote share and erroneously forming connections between concrete visual representations (like dots) and real-world entities (like votes). Informed by these insights, we revised our prototypes to address ambiguity in interpreting visual encodings, particularly through the inclusion of extensive annotations. As we navigated these design paths, we contributed a design space and insights that may help others when designing uncertainty visualizations. 
We also hope that our design lessons and research process can inspire the research community when exploring topics related to designing visualizations for the general public.","authors":[{"affiliations":["Northwestern University, Evanston, United States"],"email":"fumeng.p.yang@gmail.com","is_corresponding":true,"name":"Fumeng Yang"},{"affiliations":["Northwestern University, Evanston, United States","Northwestern University, Evanston, United States"],"email":"mandicai2028@u.northwestern.edu","is_corresponding":false,"name":"Mandi Cai"},{"affiliations":["Northwestern University, Evanston, United States"],"email":"chloemortenson2026@u.northwestern.edu","is_corresponding":false,"name":"Chloe Rose Mortenson"},{"affiliations":["Northwestern University, Evanston, United States"],"email":"hoda@u.northwestern.edu","is_corresponding":false,"name":"Hoda Fakhari"},{"affiliations":["Northwestern University, Evanston, United States"],"email":"aysedlokmanoglu@gmail.com","is_corresponding":false,"name":"Ayse Deniz Lokmanoglu"},{"affiliations":["Northwestern University, Evanston, United States"],"email":"nicholas.diakopoulos@gmail.com","is_corresponding":false,"name":"Nicholas Diakopoulos"},{"affiliations":["Northwestern University, Evanston, United States"],"email":"erik.nisbet@northwestern.edu","is_corresponding":false,"name":"Erik Nisbet"},{"affiliations":["Northwestern University, Chicago, United States"],"email":"matthew.kay@gmail.com","is_corresponding":false,"name":"Matthew Kay"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1488","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"The Backstory to \u201cSwaying the Public\u201d: A Design Chronicle of Election Forecast Visualizations","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1489","abstract":"Projecting high-dimensional vectors into two dimensions for visualization, known as embedding visualization, facilitates perceptual reasoning and interpretation. Comparison of multiple embedding visualizations drives decision-making in many domains, but conventional comparison methods are limited by a reliance on direct point correspondences. This requirement precludes embedding comparisons without point correspondences, such as two different datasets of annotated images, and fails to capture meaningful higher-level relationships among point groups. To address these shortcomings, we propose a general framework to compare embedding visualizations based on shared class labels rather than individual points. Our approach partitions points into regions corresponding to three key class concepts---confusion, neighborhood, and relative size---to characterize intra- and inter-class relationships. Informed by a preliminary user study, we realize an implementation of our framework using perceptual neighborhood graphs to define these regions and introduce metrics to quantify each concept. 
We demonstrate the generality of our framework with use cases from machine learning and single-cell biology, highlighting our metrics' ability to surface insightful comparisons across label hierarchies. To assess the effectiveness of our approach, we conducted a user study with five machine learning researchers and six single-cell biologists using an interactive and scalable prototype developed in Python and Rust. Our metrics enable more structured comparison through visual guidance and increased participants\u2019 confidence in their findings.","authors":[{"affiliations":["Harvard Medical School, Boston, United States"],"email":"trevor_manz@g.harvard.edu","is_corresponding":true,"name":"Trevor Manz"},{"affiliations":["Ozette Technologies, Seattle, United States"],"email":"f.lekschas@gmail.com","is_corresponding":false,"name":"Fritz Lekschas"},{"affiliations":["Ozette Technologies, Seattle, United States"],"email":"palmergreene@gmail.com","is_corresponding":false,"name":"Evan Greene"},{"affiliations":["Ozette Technologies, Seattle, United States"],"email":"greg@ozette.com","is_corresponding":false,"name":"Greg Finak"},{"affiliations":["Harvard Medical School, Boston, United States"],"email":"nils@hms.harvard.edu","is_corresponding":false,"name":"Nils Gehlenborg"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1489","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"A General Framework for Comparing Embedding Visualizations Across Class-Label Hierarchies","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1494","abstract":"Topological abstractions offer a method to summarize the behavior of vector fields, but computing them robustly can be challenging due to numerical precision issues. One alternative is to represent the vector field using a discrete approach, which constructs a collection of pairs of simplices in the input mesh that satisfies criteria introduced by Forman\u2019s discrete Morse theory. While numerous approaches exist to compute pairs in the restricted case of the gradient of a scalar field, state-of-the-art algorithms for the general case of vector fields require expensive optimization procedures. This paper introduces a fast, novel approach for pairing simplices of two-dimensional, triangulated vector fields that do not vary in time. The key insight of our approach is that we can employ a local evaluation, inspired by the approach used to construct a discrete gradient field, where every cell in a mesh is considered by no more than one of its vertices. Specifically, we observe that for any edge in the input mesh, we can uniquely assign an outward direction of flow. We can further expand this consistent notion of outward flow at each vertex, which corresponds to the concept of a downhill flow in the case of scalar fields. 
Working with outward flow enables a linear-time algorithm that processes the (outward) neighborhoods of each vertex one-by-one, similar to the approach used for scalar fields. We couple our approach to constructing discrete vector fields with a method to extract, simplify, and visualize topological features. Empirical results on analytic and simulation data demonstrate drastic improvements in running time, produce features similar to the current state-of-the-art, and show the application of simplification to large, complex flows.","authors":[{"affiliations":["University of Arizona, Tucson, United States"],"email":"finkent@arizona.edu","is_corresponding":true,"name":"Tanner Finken"},{"affiliations":["Sorbonne Universit\u00e9, Paris, France"],"email":"julien.tierny@sorbonne-universite.fr","is_corresponding":false,"name":"Julien Tierny"},{"affiliations":["University of Arizona, Tucson, United States"],"email":"josh@cs.arizona.edu","is_corresponding":false,"name":"Joshua A Levine"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1494","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Localized Evaluation for Constructing Discrete Vector Fields","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1500","abstract":"Haptic feedback provides an essential sensory stimulus crucial for interacting with and analyzing three-dimensional spatio-temporal phenomena on surface visualizations. Given its ability to provide enhanced spatial perception and scene maneuverability, virtual reality (VR) catalyzes haptic interactions on surface visualizations. Various interaction modes, encompassing both mid-air and on-surface interactions---with or without the application of assisting force stimuli---have been explored using haptic force feedback devices. In this paper, we evaluate the use of on-surface and assisted on-surface haptic modes of interaction compared to a no-haptic interaction mode. A force-based haptic stylus is used for all three modalities; the on-surface mode uses collision-based forces, whereas the assisted on-surface mode is accompanied by an additional snapping force. We conducted a within-subjects user study involving fundamental interaction tasks performed on surface visualizations. Keeping a consistent visual design across all three modes, our study incorporates tasks that require the localization of the highest, lowest, and random points on surfaces; and tasks that focus on brushing curves on surfaces with varying complexity and occlusion levels. Our findings show that participants took almost the same time to brush curves using all the interaction modes. They could draw smoother curves using the on-surface interaction modes compared to the no-haptic mode. However, the assisted on-surface mode provided better accuracy than the on-surface mode. 
The on-surface mode was slower in point localization, but the accuracy depended on the visual cues and occlusions associated with the tasks. Finally, we discuss participant feedback on using haptic force feedback as a tangible input modality and share takeaways to aid the design of haptics-based tangible interactions for surface visualizations.","authors":[{"affiliations":["University of Calgary, Calgary, Canada"],"email":"hamza.afzaal@ucalgary.ca","is_corresponding":true,"name":"Hamza Afzaal"},{"affiliations":["University of Calgary, Calgary, Canada"],"email":"ualim@ucalgary.ca","is_corresponding":false,"name":"Usman Alim"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1500","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Evaluating Force-based Haptics for Immersive Tangible Interactions with Surface Visualizations","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1502","abstract":"Visualization is widely used for exploring personal data, but many visualization authoring systems do not support expressing data in flexible, personal, and organic layouts. Sketching is an accessible tool for experimenting with visualization designs, but formalizing sketched elements into structured data representations is difficult, as modifying hand-drawn glyphs to encode data when available is labour-intensive and error prone. We propose an approach where authors structure their own expressive templates, capturing implicit style as well as explicit data mappings, through sketching a representative visualization for an envisioned or partial dataset. Our approach seeks to support freeform exploration and partial specification, balanced against interactive machine support for specifying the generative procedural rules. We implement this approach in DataGarden, a system designed to support hierarchical data visualizations, and evaluate it with 12 participants in a reproduction study and four experts in a freeform creative task. Participants readily picked up the core idea of template authoring, and the variety of workflows we observed highlight how this process serves design and data ideation as well as visual constraint iteration. 
We discuss challenges in implementing the design considerations underpinning DataGarden, and illustrate its potential in a gallery of visualizations generated from authored templates.","authors":[{"affiliations":["Universit\u00e9 Paris-Saclay, Orsay, France"],"email":"anna.offenwanger@gmail.com","is_corresponding":true,"name":"Anna Offenwanger"},{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Inria, LISN, Orsay, France"],"email":"theophanis.tsandilas@inria.fr","is_corresponding":false,"name":"Theophanis Tsandilas"},{"affiliations":["University of Toronto, Toronto, Canada"],"email":"fanny@dgp.toronto.edu","is_corresponding":false,"name":"Fanny Chevalier"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1502","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"DataGarden: Formalizing Personal Sketches into Structured Visualization Templates","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1503","abstract":"The increasing reliance on Large Language Models (LLMs) for health information seeking can pose severe risks due to the potential for misinformation and the complexity of these topics. This paper introduces KnowNet, a visualization system that integrates LLMs with Knowledge Graphs (KG) to provide enhanced accuracy and structured exploration. One core idea in KnowNet is to conceptualize the understanding of a subject as the gradual construction of a graph visualization, aligning the user's cognitive process with both the structured data in KGs and the unstructured outputs from LLMs. Specifically, we extracted triples (e.g., entities and their relations) from LLM outputs and mapped them into the validated information and supported evidence in external KGs. Based on the neighborhood of the currently explored entities in KGs, KnowNet provides recommendations for further inquiry, aiming to guide a comprehensive understanding without overlooking critical aspects. A progressive graph visualization is proposed to show the alignment between LLMs and KGs, track previous inquiries, and connect this history with current queries and next-step recommendations. 
We demonstrate the effectiveness of our system via use cases and expert interviews.","authors":[{"affiliations":["University of Minnesota, Minneapolis, United States"],"email":"yan00111@umn.edu","is_corresponding":false,"name":"Youfu Yan"},{"affiliations":["University of Minnesota, Minneapolis, United States"],"email":"hou00127@umn.edu","is_corresponding":false,"name":"Yu Hou"},{"affiliations":["University of Minnesota, Minneapolis, United States"],"email":"xiao0290@umn.edu","is_corresponding":false,"name":"Yongkang Xiao"},{"affiliations":["University of Minnesota, Minneapolis, United States"],"email":"zhan1386@umn.edu","is_corresponding":false,"name":"Rui Zhang"},{"affiliations":["University of Minnesota, Minneapolis , United States"],"email":"qianwen@umn.edu","is_corresponding":true,"name":"Qianwen Wang"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1503","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Guided Health-related Information Seeking from LLMs via Knowledge Graph Integration","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1504","abstract":"A wide range of visualization authoring interfaces enable the creation of highly customized visualizations. However, prioritizing expressiveness often impedes the learnability of the authoring interface. The diversity of users, such as varying computational skills and prior experiences in user interfaces, makes it even more challenging for a single authoring interface to satisfy the needs of a broad audience. In this paper, we introduce a framework to balance learnability and expressivity in a visualization authoring system. Adopting insights from learnability studies, such as multimodal interaction and visualization literacy, we explore the design space of blending multiple visualization authoring interfaces for supporting authoring tasks in a complementary and flexible manner. To evaluate the effectiveness of blending interfaces, we implemented a proof-of-concept system, Blace, that combines four common visualization authoring interfaces\u2014template-based, shelf configuration, natural language, and code editor\u2014that are tightly linked to one another to help users easily relate unfamiliar interfaces to more familiar ones. Using the system, we conducted a user study with 12 domain experts who regularly visualize genomics data as part of their analysis workflow. Participants with varied visualization and programming backgrounds were able to successfully reproduce complex visualization examples without a guided tutorial in the study. Feedback from a post-study qualitative questionnaire further suggests that blending interfaces enabled participants to learn the system easily and assisted them in confidently editing unfamiliar visualization grammar in the code editor, enabling expressive customization. 
Reflecting on our study results and the design of our system, we discuss the different interaction patterns that we identified and design implications for blending visualization authoring interfaces.","authors":[{"affiliations":["Harvard Medical School, Boston, United States"],"email":"sehi_lyi@hms.harvard.edu","is_corresponding":true,"name":"Sehi L'Yi"},{"affiliations":["Eindhoven University of Technology, Eindhoven, Netherlands"],"email":"a.v.d.brandt@tue.nl","is_corresponding":false,"name":"Astrid van den Brandt"},{"affiliations":["Harvard Medical School, Boston, United States"],"email":"etowah_adams@hms.harvard.edu","is_corresponding":false,"name":"Etowah Adams"},{"affiliations":["Harvard Medical School, Boston, United States"],"email":"huyen_nguyen@hms.harvard.edu","is_corresponding":false,"name":"Huyen N. Nguyen"},{"affiliations":["Harvard Medical School, Boston, United States"],"email":"nils@hms.harvard.edu","is_corresponding":false,"name":"Nils Gehlenborg"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1504","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Learnable and Expressive Visualization Authoring Through Blended Interfaces","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1522","abstract":"Despite the recent surge of research efforts to make data visualizations accessible to people who are blind or have low-vision (BLV), how to support BLV people's data analysis remains an important and challenging question. As refreshable tactile displays (RTDs) become cheaper and conversational agents continue to improve, their combination provides a promising approach to support BLV people's interactive data analysis. To understand how BLV people would use and react to a system combining an RTD with a conversational agent, we conducted a Wizard-of-Oz study with 11 BLV participants involving line graphs, bar charts, and isarithmic maps. From an analysis of participant interactions, we identified nine distinct patterns and learned that the choice of modalities depended on the type of task and prior experience with tactile graphics. We also found that participants strongly preferred the combination of RTD and speech to a single modality, and that participants with more tactile experience described how tactile images facilitated deeper engagement with the data and supported independent interpretation. 
Our findings will inform the design of interfaces for such interactive mixed-modality systems.","authors":[{"affiliations":["Monash University, Melbourne, Australia"],"email":"samuel.reinders@monash.edu","is_corresponding":true,"name":"Samuel Reinders"},{"affiliations":["Monash University, Melbourne, Australia"],"email":"matthew.butler@monash.edu","is_corresponding":false,"name":"Matthew Butler"},{"affiliations":["Monash University, Clayton, Australia"],"email":"ingrid.zukerman@monash.edu","is_corresponding":false,"name":"Ingrid Zukerman"},{"affiliations":["Yonsei University, Seoul, Korea, Republic of","Microsoft Research, Redmond, United States"],"email":"b.lee@yonsei.ac.kr","is_corresponding":false,"name":"Bongshin Lee"},{"affiliations":["Monash University, Melbourne, Australia"],"email":"lizhen.qu@monash.edu","is_corresponding":false,"name":"Lizhen Qu"},{"affiliations":["Monash University, Melbourne, Australia"],"email":"kim.marriott@monash.edu","is_corresponding":false,"name":"Kim Marriott"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1522","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"When Refreshable Tactile Displays Meet Conversational Agents: Investigating Accessible Data Presentation and Analysis with Touch and Speech","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1533","abstract":"We introduce DiffFit, a differentiable algorithm for fitting protein atomistic structures into an experimentally reconstructed Cryo-Electron Microscopy (cryo-EM) volume map. This process is essential in structural biology to semi-automatically reconstruct large meso-scale models of complex protein assemblies and complete cellular structures that are based on measured cryo-EM data. Current approaches require manual fitting in 3D that already results in approximately aligned structures followed by an automated fine-tuning of the alignment. With our DiffFit approach, we enable domain scientists to automatically fit new structures and visualize the fitting results for inspection and interactive revision. Our fitting begins with differentiable 3D rigid transformations of the protein atom coordinates, followed by sampling the density values at its atom coordinates from the target cryo-EM volume. To ensure a meaningful correlation between the sampled densities and the protein structure, we propose a novel loss function based on a multi-resolution volume-array approach and the exploitation of the negative space. Such a loss function serves as a critical metric for assessing the fitting quality, ensuring both fitting accuracy and improved visualization of the results. We assessed the placement quality of DiffFit with several large, realistic datasets and found its quality to be superior to that of previous methods. We further evaluated our method in two use cases. 
First, we demonstrate its use in the process of automating the integration of known composite structures into larger protein complexes. Second, we show that it facilitates the fitting of predicted protein domains into volume densities to aid researchers in the identification of unknown proteins. We implemented our algorithm as an open-source plugin (github.com/nanovis/DiffFitViewer) in ChimeraX, a leading visualization software in the field. All supplemental materials are available at osf.io/5tx4q.","authors":[{"affiliations":["King Abdullah University of Science and Technology, Thuwal, Saudi Arabia"],"email":"deng.luo@kaust.edu.sa","is_corresponding":true,"name":"Deng Luo"},{"affiliations":["King Abdullah University of Science and Technology, Thuwal, Saudi Arabia"],"email":"zainab.alsuwaykit@kaust.edu.sa","is_corresponding":false,"name":"Zainab Alsuwaykit"},{"affiliations":["King Abdullah University of Science and Technology, Thuwal, Saudi Arabia"],"email":"dawar.khan@kaust.edu.sa","is_corresponding":false,"name":"Dawar Khan"},{"affiliations":["King Abdullah University of Science and Technology, Thuwal, Saudi Arabia"],"email":"ondrej.strnad@kaust.edu.sa","is_corresponding":false,"name":"Ond\u0159ej Strnad"},{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Orsay, France","Inria, Saclay, France"],"email":"tobias.isenberg@gmail.com","is_corresponding":false,"name":"Tobias Isenberg"},{"affiliations":["King Abdullah University of Science and Technology, Thuwal, Saudi Arabia"],"email":"ivan.viola@kaust.edu.sa","is_corresponding":false,"name":"Ivan Viola"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1533","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"DiffFit: Visually-Guided Differentiable Fitting of Molecule Structures to a Cryo-EM Map","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1544","abstract":"Large Language Models (LLMs) have been successfully adopted for a variety of visualization tasks, but how far are we from perceptually aware LLMs that can predict human takeaways from visualizations? Graphical perception literature has shown that human chart takeaways are sensitive to visualization design choices, such as the spatial arrangement. In this work, we examine how well LLMs can predict such design choice sensitivity when generating takeaways, using bar charts with varying spatial layouts as a case study. We test four common chart arrangements: vertically juxtaposed, horizontally juxtaposed, overlaid, and stacked, through three experimental phases. In Phase 1, we identified the optimal configuration of LLMs to generate meaningful chart takeaways, across four LLM models (GPT3.5, GPT4, GPT4V, and Gemini 1.0 Pro), two temperature settings (0, 0.7), four chart specifications (Vega-Lite, Matplotlib, ggplot2, and scene graphs), and several prompting strategies. 
We found that even state-of-the-art LLMs can struggle to generate factually accurate takeaways. In Phase 2, using the optimal LLM configuration, we generated 30 chart takeaways across the four arrangements of bar charts using two datasets, with both zero-shot and one-shot settings. Compared to data on human takeaways from prior work, we found that the takeaways LLMs generate often do not align with human comparisons. In Phase 3, we examined the effect of the charts\u2019 underlying data values on takeaway alignment between humans and LLMs, and found both matches and mismatches. Overall, our work evaluates the ability of LLMs to emulate human interpretations of data and points to challenges and opportunities in using LLMs to predict human-aligned chart takeaways.","authors":[{"affiliations":["University of Washington, Seattle, United States"],"email":"wwill@cs.washington.edu","is_corresponding":true,"name":"Huichen Will Wang"},{"affiliations":["Adobe Research, Seattle, United States"],"email":"jhoffs@adobe.com","is_corresponding":false,"name":"Jane Hoffswell"},{"affiliations":["University of Massachusetts Amherst, Amherst, United States"],"email":"yukithane@gmail.com","is_corresponding":false,"name":"Sao Myat Thazin Thane"},{"affiliations":["Adobe Research, San Jose, United States"],"email":"victorbursztyn2022@u.northwestern.edu","is_corresponding":false,"name":"Victor S. Bursztyn"},{"affiliations":["Georgia Tech, Atlanta, United States"],"email":"cxiong@gatech.edu","is_corresponding":false,"name":"Cindy Xiong Bearfield"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1544","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"How Aligned are Human Chart Takeaways and LLM Predictions? A Case Study on Bar Charts with Varying Layouts","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1547","abstract":"Visual validation of regression models in scatterplots is a common practice for assessing model quality, yet its efficacy remains unquantified. We conducted two empirical experiments to investigate individuals' ability to visually validate linear regression models (linear trends) and to examine the impact of common visualization designs on validation quality. The first experiment showed that the level of accuracy for visual estimation of slope (i.e., fitting a line to data) is higher than for visual validation of slope (i.e., accepting a shown line). Notably, we found bias toward slopes that are ''too steep'' in both cases. This led to novel insights that participants naturally assessed regression with orthogonal distances between the points and the line (i.e., ODR regression) rather than the common vertical distances (OLS regression). In the second experiment, we investigated whether incorporating common designs for regression visualization (error lines, bounding boxes, and confidence intervals) would improve visual validation. 
Even though error lines reduced validation bias, results failed to show the desired improvements in accuracy for any design. Overall, our findings suggest caution in using visual model validation for linear trends in scatterplots.","authors":[{"affiliations":["University of Cologne, Cologne, Germany"],"email":"braun@cs.uni-koeln.de","is_corresponding":true,"name":"Daniel Braun"},{"affiliations":["Tufts University, Medford, United States"],"email":"remco@cs.tufts.edu","is_corresponding":false,"name":"Remco Chang"},{"affiliations":["University of Wisconsin - Madison, Madison, United States"],"email":"gleicher@cs.wisc.edu","is_corresponding":false,"name":"Michael Gleicher"},{"affiliations":["University of Cologne, Cologne, Germany"],"email":"landesberger@cs.uni-koeln.de","is_corresponding":false,"name":"Tatiana von Landesberger"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1547","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Beware of Validation by Eye: Visual Validation of Linear Trends in Scatterplots","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1568","abstract":"Dimensionality reduction techniques are widely used for visualizing high-dimensional data. However, support for interpreting patterns in dimensionality reduction results in the context of the original data space is often insufficient. Consequently, users may struggle to extract insights from the projections. In this paper we introduce DimBridge, a visual analytics tool that allows users to interact with visual patterns in a projection and retrieve corresponding data patterns. DimBridge supports several interactions, allowing users to perform various analyses, from contrasting multiple clusters to explaining complex latent structures. Leveraging first-order predicate logic, DimBridge identifies subspaces in the original dimensions relevant to a queried pattern and provides an interface for users to visualize and interact with them. We demonstrate how DimBridge can help users overcome the challenges associated with interpreting visual patterns in projections.","authors":[{"affiliations":["Tufts University, Medford, United States"],"email":"brianmontambault@gmail.com","is_corresponding":true,"name":"Brian Montambault"},{"affiliations":["Tufts University, Medford, United States"],"email":"gabriel.appleby@tufts.edu","is_corresponding":false,"name":"Gabriel Appleby"},{"affiliations":["Tufts University, Boston, United States"],"email":"jen@cs.tufts.edu","is_corresponding":false,"name":"Jen Rogers"},{"affiliations":["Tufts University, Medford, United States"],"email":"camelia_daniela.brumar@tufts.edu","is_corresponding":false,"name":"Camelia D. 
Brumar"},{"affiliations":["Vanderbilt University, Nashville, United States"],"email":"mingwei.li@tufts.edu","is_corresponding":false,"name":"Mingwei Li"},{"affiliations":["Tufts University, Medford, United States"],"email":"remco@cs.tufts.edu","is_corresponding":false,"name":"Remco Chang"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1568","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"DimBridge: Interactive Explanation of Visual Patterns in Dimensionality Reductions with Predicate Logic","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1571","abstract":"Effective security patrol management is critical for ensuring safety in diverse environments such as art galleries, airports, and factories. The behavior of patrols in these situations can be modeled by patrolling games. They simulate the behavior of the patrol and adversary in the building, which is modeled as a graph of interconnected nodes representing rooms. The designers of algorithms solving the game face the problem of analyzing complex graph layouts with temporal dependencies. Therefore, appropriate visual support is crucial for them to work effectively. In this paper, we present a novel tool that helps the designers of patrolling games explore the outcomes of the proposed algorithms and approaches, evaluate their success rate, and propose modifications that can improve their solutions. Our tool offers an intuitive and interactive interface, featuring a detailed exploration of patrol routes and probabilities of taking them, simulation of patrols, and other requested features. In close collaboration with experts in designing patrolling games, we conducted three case studies demonstrating the usage and usefulness of our tool. 
The prototype of the tool, along with exemplary datasets, is available at https://gitlab.fi.muni.cz/formela/strategy-vizualizer.","authors":[{"affiliations":["Masaryk University, Brno, Czech Republic"],"email":"langm@mail.muni.cz","is_corresponding":true,"name":"Mat\u011bj Lang"},{"affiliations":["Masaryk University, Brno, Czech Republic"],"email":"469242@mail.muni.cz","is_corresponding":false,"name":"Adam \u0160t\u011bp\u00e1nek"},{"affiliations":["Faculty of Informatics, Masaryk University, Brno, Czech Republic"],"email":"514179@mail.muni.cz","is_corresponding":false,"name":"R\u00f3bert Zvara"},{"affiliations":["Faculty of Informatics, Masaryk University, Brno, Czech Republic"],"email":"rehak@fi.muni.cz","is_corresponding":false,"name":"Vojt\u011bch \u0158eh\u00e1k"},{"affiliations":["Masaryk University, Brno, Czech Republic"],"email":"kozlikova@fi.muni.cz","is_corresponding":false,"name":"Barbora Kozlikova"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1571","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Who Let the Guards Out: Visual Support for Patrolling Games","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1574","abstract":"The numerical extraction of vortex cores from time-dependent fluid flow attracted much attention over the past decades. A commonly agreed upon vortex definition remained elusive since a proper vortex core needs to satisfy two hard constraints: it must be objective and Lagrangian. Recent methods on objectivization met the first but not the second constraint, since there was no formal guarantee that the resulting vortex coreline is indeed a pathline of the fluid flow. In this paper, we propose the first vortex core definition that is both objective and Lagrangian. Our approach restricts observer motions to follow along pathlines, which reduces the degrees of freedom: we only need to optimize for an observer rotation that makes the observed flow as steady as possible. This optimization succeeds along Lagrangian vortex corelines and will result in a non-zero time-partial derivative everywhere else. By performing this optimization at each point of a spatial grid, we obtain a residual scalar field, which we call vortex deviation error. The local minima on the grid serve as seed points for a gradient descent optimization that delivers sub-voxel accurate corelines. The visualization of both 2D and 3D vortex cores is based on the separation of the movement of the vortex core and the swirling flow behavior around it. While the vortex core is represented by a pathline, the swirling motion around it is visualized by streamlines in the correct frame.
We demonstrate the utility of the approach on several 2D and 3D time-dependent vector fields.","authors":[{"affiliations":["Friedrich-Alexander-University Erlangen-N\u00fcrnberg, Erlangen, Germany"],"email":"tobias.guenther@fau.de","is_corresponding":true,"name":"Tobias G\u00fcnther"},{"affiliations":["University of Magdeburg, Magdeburg, Germany"],"email":"theisel@ovgu.de","is_corresponding":false,"name":"Holger Theisel"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1574","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Objective Lagrangian Vortex Cores and their Visual Representations","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1594","abstract":"The visualization community has a rich history of reflecting upon visualization design flaws. Although research in this area has remained lively, we believe it is essential to continuously revisit this classic and critical topic in visualization research by incorporating more empirical evidence from diverse sources, characterizing new design flaws, building more systematic theoretical frameworks, and understanding the underlying reasons for these flaws. To address the above gaps, this work investigated visualization design flaws through the lens of the public, constructed a framework to summarize and categorize the identified flaws, and explored why these flaws occur. Specifically, we analyzed 2227 flawed data visualizations collected from an online gallery and derived a design task-associated taxonomy containing 76 specific design flaws. These flaws were further classified into three high-level categories (i.e., misinformation, uninformativeness, unsociability) and ten subcategories (e.g., inaccuracy, unfairness, ambiguity). Next, we organized five focus groups to explore why these design flaws occur and identified seven causes of the flaws. 
Finally, we proposed a research agenda for combating visualization design flaws and summarized nine research opportunities.","authors":[{"affiliations":["Fudan University, Shanghai, China","Fudan University, Shanghai, China"],"email":"xingyulan96@gmail.com","is_corresponding":true,"name":"Xingyu Lan"},{"affiliations":["University of Edinburgh, Edinburgh, United Kingdom","University of Edinburgh, Edinburgh, United Kingdom"],"email":"coraline.liu.dataviz@gmail.com","is_corresponding":false,"name":"Yu Liu"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1594","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"I Came Across a Junk: Understanding Design Flaws of Data Visualization from the Public's Perspective","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1595","abstract":"Assigning discriminable and harmonic colors to samples according to their class labels and spatial distribution can generate attractive visualizations and facilitate data exploration. However, as the number of classes increases, it is challenging to generate a high-quality color assignment result that accommodates all classes simultaneously. A practical solution is to organize classes into a hierarchy and then dynamically assign colors during exploration. However, existing color assignment methods fall short in generating high-quality color assignment results and dynamically aligning them with hierarchical structures. To address this issue, we develop a dynamic color assignment method for hierarchical data, which is formulated as a multi-objective optimization problem. This method simultaneously considers color discriminability, color harmony, and spatial distribution at each hierarchical level. By using the colors of parent classes to guide the color assignment of their child classes, our method further promotes both consistency and clarity across hierarchical levels.
We demonstrate the effectiveness of our method in generating dynamic color assignment results with quantitative experiments and a user study.","authors":[{"affiliations":["Tsinghua University, Beijing, China"],"email":"jiashu0717c@gmail.com","is_corresponding":true,"name":"Jiashu Chen"},{"affiliations":["Tsinghua University, Beijing, China"],"email":"vicayang496@gmail.com","is_corresponding":false,"name":"Weikai Yang"},{"affiliations":["Tsinghua University, Beijing, China"],"email":"jiazl22@mails.tsinghua.edu.cn","is_corresponding":false,"name":"Zelin Jia"},{"affiliations":["Tsinghua University, Beijing, China"],"email":"tarolancy@gmail.com","is_corresponding":false,"name":"Lanxi Xiao"},{"affiliations":["Tsinghua University, Beijing, China"],"email":"shixia@tsinghua.edu.cn","is_corresponding":false,"name":"Shixia Liu"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1595","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Dynamic Color Assignment for Hierarchical Data","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1597","abstract":"In understanding and redesigning the function of proteins in modern biochemistry, protein engineers are increasingly focusing on exploring regions in proteins called loops. Analyzing various characteristics of these regions helps the experts design the transfer of the desired function from one protein to another. This process is denoted as loop grafting. We designed a set of interactive visualizations that provide experts with visual support through all the loop grafting pipeline steps. The workflow is divided into several phases, reflecting the steps of the pipeline. Each phase is supported by a specific set of abstracted 2D visual representations of proteins and their loops that are interactively linked with the 3D View of proteins. By sequentially passing through the individual phases, the user shapes the list of loops that are potential candidates for loop grafting. Finally, the actual in-silico insertion of the loop candidates from one protein to the other is performed, and the results are visually presented to the user. In this way, the fully computational rational design of proteins and their loops results in newly designed protein structures that can be further assembled and tested through in-vitro experiments. We showcase the contribution of our visual support design in a real case scenario of changing the enantiomer selectivity of an engineered enzyme. Moreover, we provide the readers with the experts' feedback.","authors":[{"affiliations":["Masaryk University, Brno, Czech Republic"],"email":"kiraa@mail.muni.cz","is_corresponding":false,"name":"Filip Op\u00e1len\u00fd"},{"affiliations":["Masaryk University, Brno, Czech Republic"],"email":"paloulbrich@gmail.com","is_corresponding":false,"name":"Pavol Ulbrich"},{"affiliations":["Masaryk University, Brno, Czech Republic","St.
Anne\u2019s University Hospital, Brno, Czech Republic"],"email":"joan.planas@mail.muni.cz","is_corresponding":false,"name":"Joan Planas-Iglesias"},{"affiliations":["Masaryk University, Brno, Czech Republic","University of Bergen, Bergen, Norway"],"email":"xbyska@fi.muni.cz","is_corresponding":false,"name":"Jan By\u0161ka"},{"affiliations":["Masaryk University, Brno, Czech Republic","St. Anne\u2019s University Hospital, Brno, Czech Republic"],"email":"stourac.jan@gmail.com","is_corresponding":false,"name":"Jan \u0160toura\u010d"},{"affiliations":["Faculty of Science, Masaryk University, Brno, Czech Republic","St. Anne\u2019s University Hospital Brno, Brno, Czech Republic"],"email":"222755@mail.muni.cz","is_corresponding":false,"name":"David Bedn\u00e1\u0159"},{"affiliations":["Masaryk University, Brno, Czech Republic"],"email":"katarina.furmanova@gmail.com","is_corresponding":true,"name":"Katar\u00edna Furmanov\u00e1"},{"affiliations":["Masaryk University, Brno, Czech Republic"],"email":"kozlikova@fi.muni.cz","is_corresponding":false,"name":"Barbora Kozlikova"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1597","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Visual Support for the Loop Grafting Workflow on Proteins","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1599","abstract":"Existing deep learning-based surrogate models facilitate efficient data generation, but fall short in uncertainty quantification, efficient parameter space exploration, and reverse prediction. In our work, we introduce SurroFlow, a novel normalizing flow-based surrogate model, to learn the invertible transformation between simulation parameters and simulation outputs. The model not only allows accurate predictions of simulation outcomes for a given simulation parameter but also supports uncertainty quantification in the data generation process. Additionally, it enables efficient simulation parameter recommendation and exploration. We integrate SurroFlow and a genetic algorithm as the backend of a visual interface to support effective user-guided ensemble simulation exploration and visualization. 
Our framework significantly reduces the computational costs while enhancing the reliability and exploration capabilities of scientific surrogate models.","authors":[{"affiliations":["The Ohio State University, Columbus, United States","The Ohio State University, Columbus, United States"],"email":"shen.1250@osu.edu","is_corresponding":true,"name":"Jingyi Shen"},{"affiliations":["The Ohio State University, Columbus, United States","The Ohio State University, Columbus, United States"],"email":"duan.418@osu.edu","is_corresponding":false,"name":"Yuhan Duan"},{"affiliations":["The Ohio State University, Columbus, United States","The Ohio State University, Columbus, United States"],"email":"hwshen@cse.ohio-state.edu","is_corresponding":false,"name":"Han-Wei Shen"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1599","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"SurroFlow: A Flow-Based Surrogate Model for Parameter Space Exploration and Uncertainty Quantification","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1603","abstract":"Multi-modal embeddings form the foundation for vision-language models, such as CLIP embeddings, the most widely used text-image embeddings. However, these embeddings are hard to interpret and vulnerable to subtle misalignment of cross-modal features, resulting in decreased model performance and diminished generalization. To address this problem, we design ModalChorus, an interactive system for visual probing and alignment of multi-modal embeddings. ModalChorus primarily offers a two-stage process: 1) embedding probing with Modal Fusion Map (MFM), a novel parametric dimensionality reduction method that integrates both metric and nonmetric objectives to enhance modality fusion; and 2) embedding alignment that allows users to interactively articulate intentions for both point-set and set-set alignments. Quantitative and qualitative comparisons for CLIP embeddings with existing dimensionality reduction (e.g., t-SNE and MDS) and data fusion (e.g., data context map) methods demonstrate the advantages of MFM in showcasing cross-modal features over common vision-language datasets.
Case studies reveal that ModalChorus can facilitate intuitive discovery of misalignment and efficient re-alignment in scenarios ranging from zero-shot classification to cross-modal retrieval and generation.","authors":[{"affiliations":["The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China"],"email":"yyebd@connect.ust.hk","is_corresponding":true,"name":"Yilin Ye"},{"affiliations":["The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China"],"email":"sxiao713@connect.hkust-gz.edu.cn","is_corresponding":false,"name":"Shishi Xiao"},{"affiliations":["The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China"],"email":"xingchen.zeng@outlook.com","is_corresponding":false,"name":"Xingchen Zeng"},{"affiliations":["The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China","The Hong Kong University of Science and Technology, Hong Kong SAR, China"],"email":"weizeng@hkust-gz.edu.cn","is_corresponding":false,"name":"Wei Zeng"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1603","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"ModalChorus: Visual Probing and Alignment of Multi-modal Embeddings via Modal Fusion Map","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1606","abstract":"With the increase of graph size, it becomes difficult or even impossible to visualize graph structures clearly within the limited screen space. Consequently, it is crucial to design effective visual representations for large graphs. In this paper, we propose AdaMotif, a novel approach that can capture the essential structure patterns of large graphs and effectively reveal the overall structures via adaptive motif designs. Specifically, our approach involves partitioning a given large graph into multiple subgraphs, then clustering similar subgraphs and extracting similar structural information within each cluster. Subsequently, adaptive motifs representing each cluster are generated and utilized to replace the corresponding subgraphs, leading to a simplified visualization. Our approach aims to preserve as much information from the subgraphs as possible, effectively simplifying graphs while minimizing information loss. Notably, our approach successfully visualizes crucial community information within a large graph. We conduct case studies and a user study using both synthetic and real-world graphs to validate the effectiveness of our proposed approach.
The results demonstrate the capability of our approach in simplifying graphs while retaining important structural and community information.","authors":[{"affiliations":["Shenzhen University, Shenzhen, China"],"email":"hzhou@szu.edu.cn","is_corresponding":true,"name":"Hong Zhou"},{"affiliations":["Shenzhen University, Shenzhen, China"],"email":"laipeifeng1111@gmail.com","is_corresponding":false,"name":"Peifeng Lai"},{"affiliations":["Shenzhen University, Shenzhen, China"],"email":"zhida.sun@connect.ust.hk","is_corresponding":false,"name":"Zhida Sun"},{"affiliations":["Shenzhen University, Shenzhen, China"],"email":"2310274034@email.szu.edu.cn","is_corresponding":false,"name":"Xiangyuan Chen"},{"affiliations":["Shenzhen University, Shenzhen, China"],"email":"275621136@qq.com","is_corresponding":false,"name":"Yang Chen"},{"affiliations":["Shenzhen University, Shenzhen, China"],"email":"hswu@szu.edu.cn","is_corresponding":false,"name":"Huisi Wu"},{"affiliations":["Nanyang Technological University, Singapore, Singapore"],"email":"yong-wang@ntu.edu.sg","is_corresponding":false,"name":"Yong Wang"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1606","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"AdaMotif: Graph Simplification via Adaptive Motif Design","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1612","abstract":"Partitionings (or segmentations) divide a given domain into disjoint connected regions whose union again forms the entire domain. Multi-dimensional partitionings occur, for example, when analyzing parameter spaces of simulation models, where each segment of the partitioning represents a region of similar model behavior. Having computed a partitioning, one is commonly interested in understanding how large the segments are and which segments lie next to each other. While visual representations of 2D domain partitionings that reveal sizes and neighborhoods are straightforward, this is no longer the case when considering multi-dimensional domains of three or more dimensions. We propose an algorithm for computing 2D embeddings of multi-dimensional partitionings. The embedding shall have the following properties: It shall maintain the topology of the partitioning and optimize the area sizes and joint boundary lengths of the embedded segments to match the respective sizes and lengths in the multi-dimensional domain. We demonstrate the effectiveness of our approach by applying it to different use cases, including the visual exploration of 3D spatial domain segmentations and multi-dimensional parameter space partitionings of simulation ensembles.
We numerically evaluate our algorithm with respect to how well sizes and lengths are preserved depending on the dimensionality of the domain and the number of segments.","authors":[{"affiliations":["University of M\u00fcnster, M\u00fcnster, Germany"],"email":"m_ever14@uni-muenster.de","is_corresponding":true,"name":"Marina Evers"},{"affiliations":["University of M\u00fcnster, M\u00fcnster, Germany"],"email":"linsen@uni-muenster.de","is_corresponding":false,"name":"Lars Linsen"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1612","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"2D Embeddings of Multi-dimensional Partitionings","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1613","abstract":"We present a path-based design model and system for designing and creating visualisations. Our model represents a systematic approach to constructing visual representations of data or concepts following a predefined sequence of steps. The initial step involves outlining the overall appearance of the visualisation by creating a skeleton structure, referred to as a flowpath. Subsequently, we specify objects, visual marks, properties, and appearance, storing them in a gene. Lastly, we map data onto the flowpath, ensuring suitable morphisms. Alternative designs are created by exchanging values in the gene. For example, designs that share similar traits are created by making small incremental changes to the gene. Our design method develops a wide variety of creative ideas, space-filling visualisations, and traditional designs (bar chart, pie chart, etc.). Our implementation demonstrates the model, and we apply the output visualisations onto a smart-watch and onto visualisation dashboards. In this article we (1) introduce, define and explain the path model and discuss possibilities for its use, (2) present our implementation, results, and evaluation, and (3) demonstrate and evaluate an application of its use on a mobile watch.","authors":[{"affiliations":["ExaDev, Gaerwen, United Kingdom","Bangor University, Bangor, United Kingdom"],"email":"james.ogge@gmail.com","is_corresponding":false,"name":"James R Jackson"},{"affiliations":["Bangor University, Bangor, United Kingdom"],"email":"p.ritsos@bangor.ac.uk","is_corresponding":false,"name":"Panagiotis D. Ritsos"},{"affiliations":["Bangor University, Bangor, United Kingdom"],"email":"p.butcher@bangor.ac.uk","is_corresponding":false,"name":"Peter W. S.
Butcher"},{"affiliations":["Bangor University, Bangor, United Kingdom"],"email":"j.c.roberts@bangor.ac.uk","is_corresponding":true,"name":"Jonathan C Roberts"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1613","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Path-based Design Model for Constructing and Exploring Alternative Visualisations","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1615","abstract":"We present Cell2Cell, a novel visual analytics approach for quantifying and visualizing networks of cell-cell interactions in three-dimensional (3D) multi-channel cancerous tissue data. By analyzing cellular interactions, biomedical domain experts can gain a more accurate understanding of the intricate relationships between cancer and immune cells. Recent methods have focused on inferring interaction based on the proximity of cells in low-resolution 2D multi-channel imaging data. By contrast, we analyze cell interactions by quantifying the intensities of protein expressions extracted from high-resolution 3D multi-channel volume data. Such analyses have a strong exploratory nature and require a tight integration of domain experts in the analysis loop to leverage their deep knowledge. We propose two complementary semi-automated approaches to cope with the increasing size and complexity of the data in an interactive fashion: On the one hand, we interpret cell-to-cell interactions as edges in a cell graph and analyze the image signal (protein expressions) along those edges, using spatial as well as abstract data visualizations. Complementary, we propose a cell-centered approach, enabling scientists to visually analyze polarized distributions of proteins in three dimensions, which also captures neighboring cells with biochemical and cell biological consequences. We evaluate our application in two case studies, where computational biologists and medical experts use \\tool to investigate tumor micro-environments to identify and quantify T-cell activation in human tissue data. 
We confirmed that our tool can fully solve both use cases and enables a streamlined and detailed analysis of cell-cell interactions.","authors":[{"affiliations":["Harvard Medical School, Boston, United States"],"email":"eric.moerth@gmx.at","is_corresponding":true,"name":"Eric M\u00f6rth"},{"affiliations":["University of Vienna, Vienna, Austria"],"email":"kevin.sidak@univie.ac.at","is_corresponding":false,"name":"Kevin Sidak"},{"affiliations":["Harvard Medical School, Boston, United States"],"email":"zoltan_maliga@hms.harvard.edu","is_corresponding":false,"name":"Zoltan Maliga"},{"affiliations":["University of Vienna, Vienna, Austria"],"email":"torsten.moeller@univie.ac.at","is_corresponding":false,"name":"Torsten M\u00f6ller"},{"affiliations":["Harvard Medical School, Boston, United States"],"email":"nils@hms.harvard.edu","is_corresponding":false,"name":"Nils Gehlenborg"},{"affiliations":["Harvard University, Cambridge, United States"],"email":"peter_sorger@hms.harvard.edu","is_corresponding":false,"name":"Peter Sorger"},{"affiliations":["Harvard University, Cambridge, United States"],"email":"pfister@seas.harvard.edu","is_corresponding":false,"name":"Hanspeter Pfister"},{"affiliations":["Harvard University, Cambridge, United States"],"email":"jbeyer@g.harvard.edu","is_corresponding":false,"name":"Johanna Beyer"},{"affiliations":["New York University, New York, United States","Harvard University, Boston, United States"],"email":"rk4815@nyu.edu","is_corresponding":false,"name":"Robert Kr\u00fcger"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1615","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Cell2Cell: Explorative Cell Interaction Analysis in Multi-Volumetric Tissue Data","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1626","abstract":"We propose and study a novel cross-reality environment that seamlessly integrates a monoscopic 2D surface (an interactive screen with touch and pen input) with a stereoscopic 3D space (an augmented reality HMD) to jointly host spatial data visualizations. This innovative approach combines the best of two conventional methods of displaying and manipulating spatial 3D data, enabling users to fluidly explore diverse visual forms using tailored interaction techniques. Providing such effective 3D data exploration techniques is pivotal for conveying its intricate spatial structures, often at multiple spatial or semantic scales, across various application domains and requiring diverse visual representations for effective visualization. To understand user reactions to our new environment, we began with an elicitation user study, in which we captured their responses and interactions. We observed that users adapted their interaction approaches based on perceived visual representations, with natural transitions in spatial awareness and actions while navigating across the physical surface.
Our findings then informed the development of a design space for spatial data exploration in cross-reality. We thus developed cross-reality environments tailored to three distinct domains: for 3D molecular structure data, for 3D point cloud data, and for 3D anatomical data. In particular, we designed interaction techniques that account for the inherent features of interactions in both spaces, facilitating various forms of interaction including mid-air gestures, touch interactions, pen interactions, and combinations thereof to enhance the users' sense of presence and engagement. We assessed the usability of our environment with biologists, focusing on its use for domain research. In addition, we evaluated our interaction transition designs with virtual and mixed-reality experts to gather further insights. As a result, we provide our design suggestions for the cross-reality environment, emphasizing the interaction with diverse visual representations and seamless interaction transitions between 2D and 3D spaces.","authors":[{"affiliations":["Xi'an Jiaotong-Liverpool University, Suzhou, China"],"email":"lixiang.zhao17@student.xjtlu.edu.cn","is_corresponding":false,"name":"Lixiang Zhao"},{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Orsay, France","Inria, Saclay, France"],"email":"tobias.isenberg@gmail.com","is_corresponding":false,"name":"Tobias Isenberg"},{"affiliations":["Xi'an Jiaotong-Liverpool University, Suzhou, China"],"email":"fuqi.xie20@student.xjtlu.edu.cn","is_corresponding":false,"name":"Fuqi Xie"},{"affiliations":["Xi'an Jiaotong-Liverpool University, Suzhou, China"],"email":"hainingliang@hkust-gz.edu.cn","is_corresponding":false,"name":"Hai-Ning Liang"},{"affiliations":["Xi'an Jiaotong-Liverpool University, Suzhou, China"],"email":"lingyun.yu@xjtlu.edu.cn","is_corresponding":true,"name":"Lingyun Yu"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1626","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"SpatialTouch: Exploring Spatial Data Visualizations in Cross-reality","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1632","abstract":"High-dimensional data, characterized by many features, can be difficult to visualize effectively. Dimensionality reduction techniques, such as PCA, UMAP, and t-SNE, address this challenge by projecting the data into a lower-dimensional space while preserving important relationships. TopoMap is another technique that excels at preserving the underlying structure of the data, leading to interpretable visualizations. In particular, TopoMap maps the high-dimensional data into a visual space, guaranteeing that the 0-dimensional persistence diagram of the Rips filtration of the visual space matches the one from the high-dimensional data. However, the original TopoMap algorithm can be slow and its layout can be too sparse for large and complex datasets.
In this paper, we propose three improvements to TopoMap: 1) a more space-efficient layout, 2) a significantly faster implementation, and 3) a novel treemap-based representation to aid the exploration of the projections. These advancements make TopoMap, now referred to as TopoMap++, a more powerful tool for visualizing high-dimensional data, similar to how t-SNE surpassed SNE in popularity.","authors":[{"affiliations":["New York University, New York City, United States"],"email":"vitoriaguardieiro@gmail.com","is_corresponding":true,"name":"Vitoria Guardieiro"},{"affiliations":["New York University, New York City, United States"],"email":"felipedeoliveira1407@gmail.com","is_corresponding":false,"name":"Felipe Inagaki de Oliveira"},{"affiliations":["Microsoft Research India, Bangalore, India"],"email":"harish.doraiswamy@microsoft.com","is_corresponding":false,"name":"Harish Doraiswamy"},{"affiliations":["University of Sao Paulo, Sao Carlos, Brazil"],"email":"gnonato@icmc.usp.br","is_corresponding":false,"name":"Luis Gustavo Nonato"},{"affiliations":["New York University, New York City, United States"],"email":"csilva@nyu.edu","is_corresponding":false,"name":"Claudio Silva"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1632","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"TopoMap++: A faster and more space efficient technique to compute projections with topological guarantees","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1638","abstract":"Probability density function (PDF) curves are among the few charts on a Cartesian coordinate system that are commonly presented without y-axes. This design decision may be due to the lack of relevance of vertical scaling in normal PDFs. In fact, as long as two normal PDFs have the same mean and standard deviations (SDs), they can be scaled to occupy different amounts of vertical space while still remaining statistically identical. Because unscaled PDF height increases as SD decreases, visualization designers may find themselves tempted to vertically shrink low-SD PDFs to avoid occlusion or save white space in their figures. While irregular vertical scaling has been explored in bar and line charts, the visualization community has yet to investigate how this purely visual manipulation may affect reader comparisons of PDFs. In this paper, we present two preregistered quantitative experiments (n=600, n=401) that systematically demonstrate that vertical scaling can lead to misinterpretations of PDFs. We also test visual interventions to mitigate misinterpretation. In some contexts, we find that including a y-axis reduces this effect. Overall, we find that keeping vertical scaling consistent, and therefore maintaining equal pixel areas under PDF curves, results in the highest likelihood of accurate comparisons. 
Our findings provide the first insights into the impact of vertical scaling on PDFs, and reveal the complicated nature of proportional area comparisons.","authors":[{"affiliations":["Northeastern University, Boston, United States"],"email":"racquel.fygenson@gmail.com","is_corresponding":true,"name":"Racquel Fygenson"},{"affiliations":["Northeastern University, Boston, United States"],"email":"l.padilla@northeastern.edu","is_corresponding":false,"name":"Lace M. Padilla"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1638","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"The Impact of Vertical Scaling on Normal Probability Density Function Plots","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1642","abstract":"Despite the development of numerous visual analytics tools for event sequence data across various domains, including, but not limited to, healthcare, digital marketing, and user behavior analysis, comparing these domain-specific investigations and transferring the results to new datasets and problem areas remain challenging. Task abstractions can help us go beyond domain-specific details, but existing visualization task abstractions are insufficient for event sequence visual analytics because they primarily focus on tabular datasets and often overlook automated analytical techniques. To address this gap, we propose a domain-agnostic multi-level task framework for event sequence analysis, derived from an analysis of 58 papers that present event sequence visualization systems. Our framework consists of four levels: objective, intent, strategy, and technique. Overall objectives identify the main goals of analysis. Intents comprise five high-level approaches adopted at each analysis step: augment data, simplify data, configure data, configure visualization, and create provenance. Each intent is accomplished through a number of strategies; for instance, data simplification can be achieved through aggregation, summarization, or segmentation. Finally, each strategy can be implemented by a set of techniques depending on the input and output components. We further show that techniques can be expressed through a quartet of action-input-output-criteria.
We demonstrate the framework\u2019s power through mapping case studies and discuss its similarities and differences with previous event sequence task taxonomies.","authors":[{"affiliations":["University of Maryland, College Park, College Park, United States"],"email":"kzintas@umd.edu","is_corresponding":true,"name":"Kazi Tasnim Zinat"},{"affiliations":["University of Maryland, College Park, United States"],"email":"ssakhamu@terpmail.umd.edu","is_corresponding":false,"name":"Saimadhav Naga Sakhamuri"},{"affiliations":["University of Maryland, College Park, United States"],"email":"achen151@terpmail.umd.edu","is_corresponding":false,"name":"Aaron Sun Chen"},{"affiliations":["University of Maryland, College Park, United States"],"email":"leozcliu@umd.edu","is_corresponding":false,"name":"Zhicheng Liu"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1642","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"A Multi-Level Task Framework for Event Sequence Analysis","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1681","abstract":"In recent years, the global adoption of electric vehicles (EVs) has surged, prompting a corresponding rise in the installation of charging stations. This proliferation has underscored the importance of expediting the deployment of charging infrastructure. Both academia and industry have thus devoted effort to addressing the charging station location problem (CSLP) to streamline this process. However, prevailing algorithms addressing CSLP are hampered by restrictive assumptions and computational overhead, leading to a dearth of comprehensive evaluations in the spatiotemporal dimensions. Consequently, their practical viability is restricted. Moreover, the placement of charging stations exerts a significant impact on both the road network and the power grid, which necessitates the evaluation of the potential post-deployment impacts on these interconnected networks holistically. In this study, we propose CSLens, a visual analytics system designed to inform charging station deployment decisions through the lens of coupled transportation and power networks. CSLens offers multiple visualizations and interactive features, empowering users to delve into the existing charging station layout, explore alternative deployment solutions, and assess the ensuing impact. To validate the efficacy of CSLens, we conducted two case studies and engaged in interviews with domain experts. Through these efforts, we substantiated the usability and practical utility of CSLens in enhancing the decision-making process surrounding charging station deployment.
Our findings underscore CSLens\u2019s potential to serve as a valuable asset in navigating the complexities of charging infrastructure planning.","authors":[{"affiliations":["Sun Yat-sen University, Shenzhen, China"],"email":"zhangyt85@mail2.sysu.edu.cn","is_corresponding":false,"name":"Yutian Zhang"},{"affiliations":["Sun Yat-sen University, Shenzhen, China"],"email":"xulw8@mail2.sysu.edu.cn","is_corresponding":false,"name":"Liwen Xu"},{"affiliations":["Sun Yat-sen University, Shenzhen, China"],"email":"taoshc@mail2.sysu.edu.cn","is_corresponding":false,"name":"Shaocong Tao"},{"affiliations":["Sun Yat-sen University, Shenzhen, China"],"email":"guanqx3@mail.sysu.edu.cn","is_corresponding":false,"name":"Quanxue Guan"},{"affiliations":["ShanghaiTech University, Shanghai, China"],"email":"liquan@shanghaitech.edu.cn","is_corresponding":false,"name":"Quan Li"},{"affiliations":["Sun Yat-sen University, Shenzhen, China"],"email":"zenghp5@mail.sysu.edu.cn","is_corresponding":true,"name":"Haipeng Zeng"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1681","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"CSLens: Towards Better Deploying Charging Stations via Visual Analytics \u2014\u2014 A Coupled Networks Perspective","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1693","abstract":"We introduce a visual analysis method for multiple causality graphs with different outcome variables, namely, multi-outcome causality graphs. Multi-outcome causality graphs are important in healthcare for understanding multimorbidity and comorbidity. To support the visual analysis, we collaborated with medical experts to devise two comparative visualization techniques at different stages of the analysis process. First, a progressive visualization method is proposed for comparing multiple state-of-the-art causal discovery algorithms. The method can handle mixed-type datasets comprising both continuous and categorical variables and assist in the creation of a fine-tuned causality graph of a single outcome. Second, a comparative graph layout technique and specialized visual encodings are devised for the quick comparison of multiple causality graphs. In our visual analysis approach, analysts start by building individual causality graphs for each outcome variable, and then, multi-outcome causality graphs are generated and visualized with our comparative technique for analyzing differences and commonalities of these causality graphs. 
Evaluation includes quantitative measurements on benchmark datasets, a case study with a medical expert, and expert user studies with real-world health research data.","authors":[{"affiliations":["Institute of Medical Technology, Peking University Health Science Center, Beijing, China","National Institute of Health Data Science, Peking University, Beijing, China"],"email":"mengjiefan@bjmu.edu.cn","is_corresponding":true,"name":"Mengjie Fan"},{"affiliations":["Beihang University, Beijing, China","Peking University, Beijing, China"],"email":"yu.jinlu@qq.com","is_corresponding":false,"name":"Jinlu Yu"},{"affiliations":["University of Stuttgart, Stuttgart, Germany"],"email":"weiskopf@visus.uni-stuttgart.de","is_corresponding":false,"name":"Daniel Weiskopf"},{"affiliations":["Tongji College of Design and Innovation, Shanghai, China"],"email":"nan.cao@gmail.com","is_corresponding":false,"name":"Nan Cao"},{"affiliations":["Beijing University of Chinese Medicine, Beijing, China"],"email":"wanghuaiyuelva@126.com","is_corresponding":false,"name":"Huaiyu Wang"},{"affiliations":["Peking University, Beijing, China"],"email":"zhoulng@pku.edu.cn","is_corresponding":false,"name":"Liang Zhou"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1693","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Visual Analysis of Multi-outcome Causal Graphs","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1699","abstract":"Room-scale immersive data visualisations provide viewers a wide-scale overview of a large dataset, but to interact precisely with individual data points they typically have to navigate to change their point of view. In traditional screen-based visualisations, focus-and-context techniques allow visualisation users to keep a full dataset in view while making detailed selections. Such techniques have been studied extensively on desktop to allow precise selection within large data sets, but they have not been explored in immersive 3D modalities. In this paper we develop a novel immersive focus-and-context technique based on a 'magic portal' metaphor adapted specifically for data visualisation scenarios. An extendable-hand interaction technique is used to place a portal close to the region of interest. The other end of the portal then opens comfortably within the user's physical reach such that they can reach through to precisely select individual data points. Through a controlled study with 24 participants, we find strong evidence that portals reduce overshoots in selection and overall hand trajectory length, reducing arm fatigue compared to ranged interaction without the portal. The portals also enable us to use a robot arm to provide haptic feedback for data within the limited volume of the portal region.
We demonstrate applications for portal-based selection through two use-case scenarios.","authors":[{"affiliations":["Monash University, Melbourne, Australia"],"email":"dai.shaozhang@gmail.com","is_corresponding":true,"name":"Shaozhang Dai"},{"affiliations":["Monash University, Melbourne, Australia"],"email":"yi.li5@monash.edu","is_corresponding":false,"name":"Yi Li"},{"affiliations":["The University of British Columbia (Okanagan Campus), Kelowna, Canada"],"email":"barrett.ens@ubc.ca","is_corresponding":false,"name":"Barrett Ens"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"lonni.besancon@gmail.com","is_corresponding":false,"name":"Lonni Besan\u00e7on"},{"affiliations":["Monash University, Melbourne, Australia"],"email":"tgdwyer@gmail.com","is_corresponding":false,"name":"Tim Dwyer"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1699","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Precise Embodied Data Selection in Room-scale Visualisations While Retaining View Context","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1705","abstract":"Contour trees describe the topology of level sets in scalar fields and are widely used in topological data analysis and visualization. A main challenge for utilizing contour trees for large-scale scientific data is their computation at scale using high-performance computing. To address this challenge, recent work has introduced distributed hierarchical contour trees for distributed computation and storage of contour trees. However, effective use of these distributed structures in analysis and visualization requires subsequent computation of geometric properties and branch decomposition to support contour extraction and exploration. In this work, we introduce distributed algorithms for augmentation, hypersweeps, and branch decomposition that enable parallel computation of geometric properties, and support the use of distributed contour trees as a query structure for scientific exploration. 
We evaluate the parallel performance of these algorithms and apply them to identify and extract important contours for scientific visualization.","authors":[{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"mingzhefluorite@gmail.com","is_corresponding":true,"name":"Mingzhe Li"},{"affiliations":["University of Leeds, Leeds, United Kingdom"],"email":"h.carr@leeds.ac.uk","is_corresponding":false,"name":"Hamish Carr"},{"affiliations":["Lawrence Berkeley National Laboratory, Berkeley, United States"],"email":"oruebel@lbl.gov","is_corresponding":false,"name":"Oliver R\u00fcbel"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"wang.bei@gmail.com","is_corresponding":false,"name":"Bei Wang"},{"affiliations":["Lawrence Berkeley National Laboratory, Berkeley, United States"],"email":"ghweber@lbl.gov","is_corresponding":false,"name":"Gunther H Weber"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1705","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Distributed Augmentation, Hypersweeps, and Branch Decomposition of Contour Trees for Scientific Exploration","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1708","abstract":"The widespread use of Deep Neural Networks (DNNs) has recently resulted in their application to challenging scientific visualization tasks. While advanced DNNs demonstrate impressive generalization abilities, understanding factors like prediction quality, confidence, robustness, and uncertainty is crucial. These insights aid application scientists in making informed decisions. However, DNNs lack inherent mechanisms to measure prediction uncertainty, prompting the creation of distinct frameworks for constructing robust uncertainty-aware models tailored to various visualization tasks. In this work, we develop uncertainty-aware implicit neural representations to model steady-state vector fields effectively. We comprehensively evaluate the efficacy of two principled deep uncertainty estimation techniques: (1) Deep Ensemble and (2) Monte Carlo Dropout, aimed at enabling uncertainty-informed visual analysis of features within steady vector field data. Our detailed exploration using several vector data sets indicates that uncertainty-aware models generate informative visualization results of vector field features. 
Furthermore, incorporating prediction uncertainty improves the resilience and interpretability of our DNN model, rendering it applicable for the analysis of complex vector field data sets.","authors":[{"affiliations":["Indian Institute of Technology Kanpur , Kanpur, India"],"email":"atulkrfcb@gmail.com","is_corresponding":false,"name":"Atul Kumar"},{"affiliations":["Indian Institute of Technology Kanpur , Kanpur , India"],"email":"gsiddharth2209@gmail.com","is_corresponding":false,"name":"Siddharth Garg"},{"affiliations":["Indian Institute of Technology Kanpur (IIT Kanpur), Kanpur, India"],"email":"soumya.cvpr@gmail.com","is_corresponding":true,"name":"Soumya Dutta"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1708","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Uncertainty-Aware Deep Neural Representations for Visual Analysis of Vector Field Data","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1726","abstract":"User experience in data visualization is typically assessed through post-viewing self-reports, but these overlook the dynamic cognitive processes during interaction. This study explores the use of mind wandering as a dynamic measure during visualization exploration. Participants reported mind wandering while viewing visualizations from a pre-labeled visualization database and then provided quantitative ratings of trust, engagement, and design quality, along with qualitative descriptions and short-term/long-term recall assessments. Results show that mind wandering negatively affects short-term visualization recall and various post-viewing measures, particularly for visualizations with little text annotation. Further, the type of mind wandering impacts engagement and emotional response. Mind wandering also acts as a serial mediator between visualization design elements and post-viewing measures. Overall, this research underscores the importance of incorporating mind wandering as a dynamic measure in visualization design and evaluation, offering novel avenues for enhancing user engagement and comprehension.","authors":[{"affiliations":["Arizona State University, Tempe, United States"],"email":"aarunku5@asu.edu","is_corresponding":true,"name":"Anjana Arunkumar"},{"affiliations":["Northeastern University, Boston, United States"],"email":"l.padilla@northeastern.edu","is_corresponding":false,"name":"Lace M. 
Padilla"},{"affiliations":["Arizona State University, Tempe, United States"],"email":"cbryan16@asu.edu","is_corresponding":false,"name":"Chris Bryan"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1726","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Mind Drifts, Data Shifts: Utilizing Mind Wandering to Track the Evolution of User Experience with Data Visualizations","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1730","abstract":"Understanding the input and output of data wrangling scripts is crucial for various tasks like debugging codes and onboarding new data. However, existing research on script understanding primarily focuses on revealing the process of data transformations, lacking the ability to analyze the potential scope, i.e., the space of script inputs and outputs. Meanwhile, constructing input/output space during script analysis is challenging, as the wrangling scripts could be semantically complex and diverse, and the association between different data objects is intricate. To facilitate data workers in understanding the input and output spaces of wrangling scripts, we summarize ten types of constraints to express table spaces, and build a mapping between data transformations and these constraints to guide the construction of the input/output for individual transformations. Then, we propose a constraint generation model for integrating table constraints across multiple transformations. Based on the model, we develop Ferry, an interactive system that extracts and visualizes the data constraints describing the input and output spaces of data wrangling scripts, thereby enabling users to grasp the high-level semantics of complex scripts and locate the origins of faulty data transformations. Besides, Ferry provides example input and output data to assist users in interpreting the extracted constraints, checking and resolving the conflicts between these constraints and any uploaded dataset. Ferry's effectiveness and usability are evaluated via a usage scenario and two case studies: the first assists users in onboarding new data and debugging scripts, while the second verifies input-output compatibility across data processing modules. 
Furthermore, an illustrative application is presented to demonstrate Ferry's flexibility.","authors":[{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"rickyluozs@gmail.com","is_corresponding":true,"name":"Zhongsu Luo"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"kaixiong@zju.edu.cn","is_corresponding":false,"name":"Kai Xiong"},{"affiliations":["Zhejiang University, Hangzhou,Zhejiang, China"],"email":"3220105578@zju.edu.cn","is_corresponding":false,"name":"Jiajun Zhu"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"chenran928@zju.edu.cn","is_corresponding":false,"name":"Ran Chen"},{"affiliations":["Newcastle University, Newcastle Upon Tyne, United Kingdom"],"email":"xinhuan.shu@gmail.com","is_corresponding":false,"name":"Xinhuan Shu"},{"affiliations":["Zhejiang University, Ningbo, China"],"email":"dweng@zju.edu.cn","is_corresponding":false,"name":"Di Weng"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"ycwu@zju.edu.cn","is_corresponding":false,"name":"Yingcai Wu"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1730","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Ferry: Toward Better Understanding of Input/Output Space for Data Wrangling Scripts","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1738","abstract":"As a step towards improving visualization literacy, we investigated how students approach reading visualizations differently after taking a university-level visualization course. We asked students to verbally walk through their process of making sense of unfamiliar visualizations, and conducted a qualitative analysis of these walkthroughs. Our qualitative analysis found changes in students' walkthroughs consistent with explicit learning goals of visualization courses. After taking a visualization course, students also engaged with visualizations in more sophisticated ways not fully captured by explicit learning goals: they were more likely to exhibit design empathy by thinking critically about the tradeoffs behind why a chart was designed in a particular way, and were better able to deconstruct a chart to make sense of it. We also gave students a quantitative assessment of visualization literacy and found no evidence of scores improving after the class, likely because the test we used focused on a different set of skills than those emphasized in visualization classes. While current measurement instruments for visualization literacy are useful, we propose developing standardized assessments for additional aspects of visualization literacy, such as deconstruction and design empathy. We also suggest those additional aspects could be made more explicit in learning goals set by visualization educators. 
All supplemental materials are available at https://osf.io/w5pum/?view_only=f9eca3fa4711425582d454031b9c482e.","authors":[{"affiliations":["Northwestern University, Evanston, United States"],"email":"maryam.hedayati@u.northwestern.edu","is_corresponding":true,"name":"Maryam Hedayati"},{"affiliations":["Northwestern University, Chicago, United States"],"email":"matthew.kay@gmail.com","is_corresponding":false,"name":"Matthew Kay"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1738","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"What University Students Learn In Visualization Classes","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1746","abstract":"Hypergraphs provide a natural way to represent polyadic relationships in network data. For large hypergraphs, it is often difficult to visually detect structures within the data. Recently, a scalable polygon-based visualization framework was developed allowing hypergraphs with thousands of hyperedges to be simplified and examined at different levels of detail. However, this approach does not consider structures such as cycles, bridges, and branches. Consequently, structures can be lost at simplified scales, making interpretations for real-world applications unreliable. In this paper, we define hypergraph structures using the bipartite graph representation. Powered by our analysis, we provide an algorithm to decompose large hypergraphs into meaningful features and to identify regions of non-planarity. We also introduce a set of topology preserving and topology altering atomic operations, enabling the preservation of important structures while removing topological noise in simplified scales. 
We demonstrate our approach in several real-world applications.","authors":[{"affiliations":["Oregon State University, Corvallis, United States"],"email":"oliverpe@oregonstate.edu","is_corresponding":false,"name":"Peter D Oliver"},{"affiliations":["Oregon State University, Corvallis, United States"],"email":"zhange@eecs.oregonstate.edu","is_corresponding":true,"name":"Eugene Zhang"},{"affiliations":["Oregon State University, Corvallis, United States"],"email":"zhangyue@oregonstate.edu","is_corresponding":false,"name":"Yue Zhang"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1746","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Structure-Aware Simplification for Hypergraph Visualization","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1770","abstract":"The semantic similarity between documents of a text corpus can be visualized using map-like metaphors based on two-dimensional scatterplot layouts. These layouts result from a dimensionality reduction on the document-term matrix or a representation within a latent embedding, including topic models. Thus, the resulting layout depends on the input data and hyperparameters of the dimensionality reduction and is therefore affected by changes in them. However, such changes to the layout require additional cognitive efforts from the user. In this work, we present a sensitivity study that analyzes the stability of these layouts concerning (1) changes in the text corpora, (2) changes in the hyperparameters, and (3) randomness in the initialization. Our approach has two stages: data measurement and data analysis. First, we derived layouts for the combination of three text corpora and six text embeddings and a grid-search-inspired hyperparameter selection of the dimensionality reductions. Afterward, we quantified the similarity of the layouts through ten metrics, concerning local and global structures and class separation. Second, we analyzed the resulting 42817 tabular data points in a descriptive statistical analysis. From this, we derived guidelines for informed decisions on the layout algorithm and highlighted specific hyperparameter settings. 
We provide our implementation and results as a Git repository at https://github.com/hpicgs/Topic-Models-and-Dimensionality-Reduction-Sensitivity-Study.","authors":[{"affiliations":["University of Potsdam, Digital Engineering Faculty, Hasso Plattner Institute, Potsdam, Germany"],"email":"daniel.atzberger@hpi.de","is_corresponding":true,"name":"Daniel Atzberger"},{"affiliations":["University of Potsdam, Potsdam, Germany"],"email":"tcech@uni-potsdam.de","is_corresponding":false,"name":"Tim Cech"},{"affiliations":["Hasso Plattner Institute, Faculty of Digital Engineering, University of Potsdam, Potsdam, Germany"],"email":"willy.scheibel@hpi.de","is_corresponding":false,"name":"Willy Scheibel"},{"affiliations":["Hasso Plattner Institute"],"email":"juergen.doellner@hpi.de","is_corresponding":false,"name":"J\u00fcrgen D\u00f6llner"},{"affiliations":["Utrecht University, Utrecht, Netherlands"],"email":"m.behrisch@uu.nl","is_corresponding":false,"name":"Michael Behrisch"},{"affiliations":["Graz University of Technology, Graz, Austria"],"email":"tobias.schreck@cgv.tugraz.at","is_corresponding":false,"name":"Tobias Schreck"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1770","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"A Large-Scale Sensitivity Analysis on Latent Embeddings and Dimensionality Reductions for Text Spatializations","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1793","abstract":"This research explores a novel paradigm for preserving topological segmentations in existing error-bounded lossy compressors. Today's lossy compressors rarely consider preserving topologies such as Morse-Smale complexes, and the discrepancies in topology between original and decompressed datasets could potentially result in erroneous interpretations or even incorrect scientific conclusions. In this paper, we focus on preserving Morse-Smale segmentations in 2D/3D piecewise linear scalar fields, targeting the precise reconstruction of minimum/maximum labels induced by the integral curve of each vertex. The key is to derive a series of edits during compression time; the edits are applied to the decompressed data, leading to an accurate reconstruction of segmentations while keeping the error within the prescribed error bound. To this end, we developed a workflow to fix extrema and integral curves alternately until convergence within finite iterations; we accelerate each workflow component with shared-memory/GPU parallelism to make the performance practical for coupling with compressors. 
We demonstrate use cases with fluid dynamics, ocean, and cosmology application datasets, achieving a 1000x acceleration with an NVIDIA A100 GPU.","authors":[{"affiliations":["The Ohio State University, Columbus, United States"],"email":"li.14025@osu.edu","is_corresponding":true,"name":"Yuxiao Li"},{"affiliations":["University of California, Riverside, Riverside, United States"],"email":"xlian007@ucr.edu","is_corresponding":false,"name":"Xin Liang"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"wang.bei@gmail.com","is_corresponding":false,"name":"Bei Wang"},{"affiliations":["The Ohio State University, Columbus, United States"],"email":"qiu.722@osu.edu","is_corresponding":false,"name":"Yongfeng Qiu"},{"affiliations":["Argonne National Laboratory, Lemont, United States"],"email":"lyan@anl.gov","is_corresponding":false,"name":"Lin Yan"},{"affiliations":["The Ohio State University, Columbus, United States"],"email":"guo.2154@osu.edu","is_corresponding":false,"name":"Hanqi Guo"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1793","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"MSz: An Efficient Parallel Algorithm for Correcting Morse-Smale Segmentations in Error-Bounded Lossy Compressors","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1802","abstract":"In the biomedical domain, visualizing the document embeddings of an extensive corpus has been widely used in information-seeking tasks. However, three key challenges with existing visualizations make it difficult for clinicians to find information efficiently. First, the document embeddings used in these visualizations are generated statically by pretrained language models, which cannot adapt to the user's evolving interest. Second, existing document visualization techniques cannot effectively display how the documents are relevant to users\u2019 interest, making it difficult for users to identify the most pertinent information. Third, existing embedding generation and visualization processes suffer from a lack of interpretability, making it difficult to understand, trust and use the result for decision-making. In this paper, we present a novel visual analytics pipeline for user-driven document representation and iterative information seeking (VADIS). VADIS introduces a prompt-based attention model (PAM) that generates dynamic document embedding and document relevance adjusted to the user's query. To effectively visualize these two pieces of information, we design a new document map that leverages a circular grid layout to display documents based on both their relevance to the query and the semantic similarity. Additionally, to improve the interpretability, we introduce a corpus-level attention visualization method to enhance the user's understanding of the model focus and to enable the users to identify potential oversight. 
This visualization, in turn, empowers users to refine, update and introduce new queries, thereby facilitating a dynamic and iterative information-seeking experience. We evaluated VADIS quantitatively and qualitatively on a real-world dataset of biomedical research papers to demonstrate its effectiveness.","authors":[{"affiliations":["Ohio State University, Columbus, United States"],"email":"qiu.580@buckeyemail.osu.edu","is_corresponding":true,"name":"Rui Qiu"},{"affiliations":["The Ohio State University, Columbus, United States"],"email":"tu.253@osu.edu","is_corresponding":false,"name":"Yamei Tu"},{"affiliations":["Washington University School of Medicine in St. Louis, St. Louis, United States"],"email":"yenp@wustl.edu","is_corresponding":false,"name":"Po-Yin Yen"},{"affiliations":["The Ohio State University , Columbus , United States"],"email":"hwshen@cse.ohio-state.edu","is_corresponding":false,"name":"Han-Wei Shen"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1802","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"VADIS: A Visual Analytics Pipeline for Dynamic Document Representation and Information Seeking","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1803","abstract":"Scalar field comparison is a fundamental task in scientific visualization. In topological data analysis, we compare topological descriptors of scalar fields---such as persistence diagrams and merge trees---as they provide succinct and robust abstract representations. While several similarity measures for topological descriptors seem to be both asymptotically and practically efficient with polynomial time algorithms, they do not scale well when handling large-scale, time-varying scientific data and ensembles. In this paper, we propose a new framework to facilitate the comparative analysis of merge trees, inspired by tools from locality sensitive hashing (LSH). LSH hashes similar objects into the same hash buckets with high probability. We propose two new similarity measures for merge trees that can be computed via LSH, using new extensions to Recursive MinHash and subpath signature, respectively. Our similarity measures are extremely efficient to compute and closely resemble the results of existing measures such as merge tree edit distance or geometric interleaving distance. 
Our experiments demonstrate the utility of our LSH framework in applications such as shape matching, clustering, key event detection, and ensemble summarization.","authors":[{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"lyuweiran@gmail.com","is_corresponding":false,"name":"Weiran Lyu"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"g.s.raghavendra@gmail.com","is_corresponding":true,"name":"Raghavendra Sridharamurthy"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"jeffp@cs.utah.edu","is_corresponding":false,"name":"Jeff M. Phillips"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"wang.bei@gmail.com","is_corresponding":false,"name":"Bei Wang"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1803","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Fast Comparative Analysis of Merge Trees Using Locality-Sensitive Hashing","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1805","abstract":"The optimization of cooling systems is important in many cases, for example for cabin and battery cooling in electric cars. Such an optimization is governed by multiple, conflicting objectives and it is performed across a multi-dimensional parameter space. The extent of the parameter space, the complexity of the non-linear model of the system, as well as the time needed per simulation run and factors that are not modeled in the simulation necessitate an iterative, semi-automatic approach. We present an interactive visual optimization approach, where the user works with a p-h diagram to steer an iterative, guided optimization process. A deep learning (DL) model provides estimates for parameters, given a target characterization of the system, while numerical simulation is used to predict system characteristics for an ensemble of parameter sets. Since the DL model only serves as an approximation of the inverse of the cooling system and since target characteristics can be chosen according to different, competing objectives, an iterative optimization process is realized, developing multiple sets of intermediate solutions, which are visually related to each other. The standard p-h diagram, integrated interactively in this approach, is complemented by a dual, also interactive visual representation of additional expressive measures representing the system characteristics. We show how the known four-points semantic of the p-h diagram meaningfully transfers to the dual data representation. 
When evaluating this approach with our partners in the automotive domain, we found that our solution helped with the overall comprehension of the cooling system and that it led to faster convergence during optimization.","authors":[{"affiliations":["VRVis Research Center, Vienna, Austria"],"email":"splechtna@vrvis.at","is_corresponding":false,"name":"Rainer Splechtna"},{"affiliations":["Virginia Tech, Blacksburg, United States"],"email":"behravan@vt.edu","is_corresponding":false,"name":"Majid Behravan"},{"affiliations":["AVL AST doo, Zagreb, Croatia"],"email":"mario.jelovic@avl.com","is_corresponding":false,"name":"Mario Jelovic"},{"affiliations":["Virginia Tech, Blacksburg, United States"],"email":"gracanin@vt.edu","is_corresponding":false,"name":"Denis Gracanin"},{"affiliations":["University of Bergen, Bergen, Norway"],"email":"helwig.hauser@uib.no","is_corresponding":false,"name":"Helwig Hauser"},{"affiliations":["VRVis Research Center, Vienna, Austria"],"email":"matkovic@vrvis.at","is_corresponding":true,"name":"Kresimir Matkovic"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1805","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Interactive Design-of-Experiments: Optimizing a Cooling System","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1809","abstract":"Visualizing relational data is crucial for understanding complex connections between entities in social networks, political affiliations, or biological interactions. Well-known representations like node-link diagrams and adjacency matrices offer valuable insights, but their effectiveness relies on the ability to identify patterns in the underlying topological structure. Reordering strategies and layout algorithms play a vital role in the visualization process since the arrangement of nodes, edges, or cells influences the visibility of these patterns. The BioFabric visualization combines elements of node-link diagrams and adjacency matrices, leveraging the strengths of both: the visual clarity of node-link diagrams and the tabular organization of adjacency matrices. A unique characteristic of BioFabric is the possibility to reorder nodes and edges separately. This raises the question of which combination of layout algorithms best reveals certain patterns. In this paper, we discuss patterns and anti-patterns in BioFabric, such as staircases or escalators, relate them to already established patterns, and propose metrics to evaluate their quality. Based on these quality metrics, we compared combinations of well-established reordering techniques applied to BioFabric with a well-known benchmark data set. Our experiments indicate that the edge order has a stronger influence on revealing patterns than the node layout. The results show that the best combination for revealing staircases is a barycentric node layout, together with an edge order based on node indices and length. 
Our research contributes a first building block for many promising future research directions, which we also share and discuss. A free copy of this paper and all supplemental materials are available at OSF.","authors":[{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"fuchs@dbvis.inf.uni-konstanz.de","is_corresponding":true,"name":"Johannes Fuchs"},{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"alexander.frings@uni-konstanz.de","is_corresponding":false,"name":"Alexander Frings"},{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"maria-viktoria.heinle@uni-konstanz.de","is_corresponding":false,"name":"Maria-Viktoria Heinle"},{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"keim@uni-konstanz.de","is_corresponding":false,"name":"Daniel Keim"},{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"sara.di-bartolomeo@uni-konstanz.de","is_corresponding":false,"name":"Sara Di Bartolomeo"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1809","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Quality Metrics and Reordering Strategies for Revealing Patterns in BioFabric Visualizations","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1810","abstract":"Classical bibliography, by scrutinizing preserved catalogs from both official archives and personal collections of accumulated books, examines the books throughout history, thereby elucidating cultural development across historical periods. In this work, we collaborate with domain experts to accomplish the task of data annotation concerning Chinese ancient catalogs. We introduce the CataAnno system that facilitates users in completing annotations more efficiently through cross-linked views, recommendation methods and convenient annotation interactions. The recommendation method can learn the background knowledge and annotation patterns that experts subconsciously integrate into the data during prior annotation processes. CataAnno searches for the most relevant examples previously annotated and recommends to the user. Meanwhile, the cross-linked views assist users in comprehending the correlations between entries and offer explanations for these recommendations. Evaluation and expert feedback confirm that the CataAnno system, by offering high-quality recommendations and visualizing the relationships between entries, can mitigate the necessity for specialized knowledge during the annotation process. 
This results in enhanced accuracy and consistency in annotations, thereby improving overall efficiency.","authors":[{"affiliations":["Peking University, Beijing, China"],"email":"hanning.shao@pku.edu.cn","is_corresponding":true,"name":"Hanning Shao"},{"affiliations":["Peking University, Beijing, China"],"email":"xiaoru.yuan@pku.edu.cn","is_corresponding":false,"name":"Xiaoru Yuan"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1810","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"CataAnno: An Ancient Catalog Annotator for Annotation Cleaning by Recommendation","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1830","abstract":"Over the past decade, several urban visual analytics systems have been proposed to tackle a host of challenges faced by cities, in areas as diverse as transportation, weather, and real estate. Many of these systems have been designed through engagement with urban experts, aiming to distill intricate urban analysis workflows into interactive visualizations and interfaces. The design, implementation, and practical use of these systems, however, still rely on siloed approaches that lead to bespoke tools that are hard to reproduce and extend. At the design level, these systems undervalue rich data workflows from urban experts by usually only treating them as data providers and evaluators. At the implementation level, these systems lack interoperability with other technical frameworks. At the practical use level, these systems tend to be narrowly focused on specific fields, inadvertently creating barriers for cross-domain collaboration. To tackle these gaps, we present Curio, a framework for collaborative urban visual analytics. Curio uses a dataflow model with multiple abstraction levels (code, grammar, GUI elements) to facilitate collaboration across the design and implementation of visual analytics components. The framework allows experts to intertwine preprocessing, managing, and visualization stages while tracking provenance of code and visualizations. In collaboration with urban experts, we evaluate Curio through a diverse series of use cases targeting urban accessibility, urban microclimate, and sunlight access. 
These cases use different types of urban data and domain methodologies to illustrate Curio's flexibility in tackling pressing societal challenges.","authors":[{"affiliations":["University of Illinois at Chicago, Chicago, United States"],"email":"gmorei3@uic.edu","is_corresponding":false,"name":"Gustavo Moreira"},{"affiliations":["Massachusetts Institute of Technology , Somerville, United States"],"email":"maryamh@mit.edu","is_corresponding":false,"name":"Maryam Hosseini"},{"affiliations":["University of Illinois Urbana-Champaign, Urbana-Champaign, United States"],"email":"carolinavfs@id.uff.br","is_corresponding":false,"name":"Carolina Veiga Ferreira de Souza"},{"affiliations":["Universidade Federal Fluminense, Niteroi, Brazil"],"email":"lucasalexandre.s.cc@gmail.com","is_corresponding":false,"name":"Lucas Alexandre"},{"affiliations":["Politecnico di Milano, Milano, Italy"],"email":"nicola.colaninno@polimi.it","is_corresponding":false,"name":"Nicola Colaninno"},{"affiliations":["Universidade Federal Fluminense, Niter\u00f3i, Brazil"],"email":"danielcmo@ic.uff.br","is_corresponding":false,"name":"Daniel de Oliveira"},{"affiliations":["Universidade Federal de Pernambuco, Recife, Brazil"],"email":"nivan@cin.ufpe.br","is_corresponding":false,"name":"Nivan Ferreira"},{"affiliations":["Universidade Federal Fluminense , Niteroi, Brazil"],"email":"mlage@ic.uff.br","is_corresponding":false,"name":"Marcos Lage"},{"affiliations":["University of Illinois Chicago, Chicago, United States"],"email":"fabiom@uic.edu","is_corresponding":true,"name":"Fabio Miranda"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1830","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Curio: A Dataflow-Based Framework for Collaborative Urban Visual Analytics","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1831","abstract":"When using exploratory visual analysis to examine multivariate hierarchical data, users often need to query data to narrow down the scope of analysis. However, formulating effective query expressions remains a challenge for multivariate hierarchical data, particularly when datasets become very large. To address this issue, we develop a declarative grammar, HiRegEx (Hierarchical data Regular Expression), for querying and exploring multivariate hierarchical data. Rooted in the extended multi-level task topology framework for tree visualizations (e-MLTT), HiRegEx delineates three query targets (node, path, and subtree) and two aspects for querying these targets (features and positions), and uses operators developed based on classical regular expressions for query construction. We develop a prototype system, TreeQueryER, to integrate an exploratory framework for querying and exploring multivariate hierarchical data based on HiRegEx. 
The exploratory framework includes three major components: top-down pattern specification, bottom-up data-driven inquiry, and context-creation data overview. We validate the expressiveness of HiRegEx with the tasks from the e-MLTT framework and showcase its utility and effectiveness through a usage scenario involving expert users in the analysis of a citation tree dataset.","authors":[{"affiliations":["Beijing Institute of Technology, Beijing, China"],"email":"guozhg.li@gmail.com","is_corresponding":true,"name":"Guozheng Li"},{"affiliations":["Beijing Institute of Technology, Beijing, China"],"email":"haotian.mi1@gmail.com","is_corresponding":false,"name":"Haotian Mi"},{"affiliations":["Beijing Institute of Technology, Beijing, China"],"email":"liuchi02@gmail.com","is_corresponding":false,"name":"Chi Harold Liu"},{"affiliations":["Ochanomizu University, Tokyo, Japan"],"email":"itot@is.ocha.ac.jp","is_corresponding":false,"name":"Takayuki Itoh"},{"affiliations":["Beijing Institute of Technology, Beijing, China"],"email":"wanggrbit@126.com","is_corresponding":false,"name":"Guoren Wang"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1831","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"HiRegEx: Interactive Visual Query and Exploration of Multivariate Hierarchical Data","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1833","abstract":"The concept of an intelligent augmented reality (AR) assistant has applications as significant as they are wide-ranging, with potential uses in medicine, military endeavors, and mechanics. Such an assistant must be able to perceive the performer\u2019s environment and actions, reason about the state of the environment in relation to a given task, and seamlessly interact with the performer. These interactions typically involve an AR headset equipped with a variety of sensors which capture video, audio, and haptic feedback. Previous works have sought to facilitate the development of such an assistant by visualizing these sensor data streams as well as the machine learning model outputs that support an assistant\u2019s perception and reasoning capabilities. However, existing visual analytics systems do not include biometric data or focus on user modeling, and are only capable of visualizing a single task session for a single performer at a time. Furthermore, they mainly focus on traditional task analysis that typically assumes a linear progression from one step to the next. We propose a visual analytics system that allows users to compare performance during multiple task sessions focusing on non-linear tasks where different paths or sequences can lead to the successful completion of the task. 
In particular, we design visualizations for understanding user behavior through functional near-infrared spectroscopy (fNIRS) data as a proxy for perception, attention, and memory as well as corresponding motion data (acceleration, angular velocity, and eye gaze). We distill these insights into visual embeddings that allow users to easily select groups of sessions with similar behaviors. We provide case studies that explore how insights into task performance can be gleaned from these visualizations using data collected during helicopter copilot training tasks. Finally, we evaluate our approach by conducting an in-depth examination of a think-aloud experiment with five domain experts.","authors":[{"affiliations":["New York University, New York, United States"],"email":"s.castelo@nyu.edu","is_corresponding":true,"name":"Sonia Castelo Quispe"},{"affiliations":["New York University, New York, United States"],"email":"jlrulff@gmail.com","is_corresponding":false,"name":"Jo\u00e3o Rulff"},{"affiliations":["New York University, Brooklyn, United States"],"email":"pss442@nyu.edu","is_corresponding":false,"name":"Parikshit Solunke"},{"affiliations":["New York University, New York, United States"],"email":"erin.mcgowan@nyu.edu","is_corresponding":false,"name":"Erin McGowan"},{"affiliations":["New York University, New York CIty, United States"],"email":"guandewu@nyu.edu","is_corresponding":false,"name":"Guande Wu"},{"affiliations":["New York University, Brooklyn, United States"],"email":"iran@ccrma.stanford.edu","is_corresponding":false,"name":"Iran Roman"},{"affiliations":["New York University, New York, United States"],"email":"rlopez@nyu.edu","is_corresponding":false,"name":"Roque Lopez"},{"affiliations":["New York University, Brooklyn, United States"],"email":"bs3639@nyu.edu","is_corresponding":false,"name":"Bea Steers"},{"affiliations":["New York University, New York, United States"],"email":"qisun@nyu.edu","is_corresponding":false,"name":"Qi Sun"},{"affiliations":["New York University, New York, United States"],"email":"jpbello@nyu.edu","is_corresponding":false,"name":"Juan Pablo Bello"},{"affiliations":["Northrop Grumman Mission Systems, Redondo Beach, United States"],"email":"bradley.feest@ngc.com","is_corresponding":false,"name":"Bradley S Feest"},{"affiliations":["Northrop Grumman, Aurora, United States"],"email":"michael.middleton@ngc.com","is_corresponding":false,"name":"Michael Middleton"},{"affiliations":["Northrop Grumman, Falls Church, United States"],"email":"ryan.mckendrick@ngc.com","is_corresponding":false,"name":"Ryan McKendrick"},{"affiliations":["New York University, New York City, United States"],"email":"csilva@nyu.edu","is_corresponding":false,"name":"Claudio Silva"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1833","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"HuBar: A Visual Analytics Tool to Explore Human Behaviour 
based on fNIRS in AR guidance systems","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1836","abstract":"Shape is commonly used to distinguish between categories in multi-class scatterplots. However, existing guidelines for choosing effective shape palettes rely largely on intuition and do not consider how these needs may change as the number of categories increases. Although shapes, unlike colors, are finite in number, they cannot be represented in a numerical space, making it difficult to propose general guidelines for shape choices or to shed light on the design heuristics of designer-crafted shape palettes. This paper presents a series of four experiments evaluating the efficiency of 39 shapes across three tasks -- relative mean judgment tasks, expert choices, and data correlation estimation. Given how complex and tangled the results are, rather than relying on conventional features for modeling, we built a model and introduced a corresponding design tool that offers recommendations for shape encodings. The perceptual effectiveness of shapes significantly varies across specific pairs, and certain shapes may enhance perceptual efficiency and accuracy. However, how performance varies does not map well to classical features of shape such as angles, fill, or convex hull. We developed a model based on pairwise relations between shapes measured in our experiments and the number of shapes required to intelligently recommend shape palettes for a given design. This tool provides designers with agency over shape selection while incorporating empirical elements of perceptual performance captured in our study. Our model advances the understanding of shape perception in visualization contexts and provides practical design guidelines for advanced shape usage in visualization design that optimize perceptual efficiency.","authors":[{"affiliations":["University of North Carolina-Chapel Hill, Chapel Hill, United States"],"email":"chint@cs.unc.edu","is_corresponding":true,"name":"Chin Tseng"},{"affiliations":["University of North Carolina-Chapel Hill, Chapel Hill, United States"],"email":"zeyuwang@cs.unc.edu","is_corresponding":false,"name":"Arran Zeyu Wang"},{"affiliations":["University of Oklahoma, Norman, United States"],"email":"quadri@ou.edu","is_corresponding":false,"name":"Ghulam Jilani Quadri"},{"affiliations":["University of North Carolina-Chapel Hill, Chapel Hill, United States"],"email":"danielle.szafir@cs.unc.edu","is_corresponding":false,"name":"Danielle Albers Szafir"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1836","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"An Empirically Grounded Approach for Designing Shape Palettes","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1865","abstract":"In medical diagnostics of both early disease detection and routine patient care, particle-based contamination of in-vitro 
diagnostics (IVD) consumables poses a significant threat to patients. Objective data-driven decision making on the severity of contamination is key for reducing risk to patients, while saving time and cost in the quality assessment process. Our collaborators introduced us to their quality control process, including particle data acquisition through image recognition, feature extraction, and attributes reflecting the production context of particles. Shortcomings of the current process are analysis problems, like weak support in exploring thousands of particle images, associated attributes, and ineffective knowledge externalization for sense-making. Following the design study methodology, our contributions are a characterization of the problem space and requirements, the development and validation of DaedalusData, a comprehensive discussion of our study\u2019s learnings, and a generalizable approach for knowledge externalization. DaedalusData is a visual analytics system that empowers domain experts to explore particle contamination patterns, to label particles in label alphabets, and to externalize knowledge through semi-supervised label-informed data projections. The results of our case study show that DaedalusData supports experts in generating meaningful, comprehensive data overviews. Additionally, our user study evaluation shows that DaedalusData offers high usability, efficiently supports the labeling of large quantities of particles, and utilizes externalized knowledge to augment the dataset. Reflecting on our approach, we discuss insights on dataset augmentation via human knowledge externalization, and on the scalability and trade-offs that come with the adoption of this approach in practice.","authors":[{"affiliations":["University of Z\u00fcrich, Z\u00fcrich, Switzerland","Roche pRED, Basel, Switzerland"],"email":"alexander.wyss@protonmail.com","is_corresponding":true,"name":"Alexander Wyss"},{"affiliations":["University of Zurich, Zurich, Switzerland"],"email":"gab.morgenshtern@gmail.com","is_corresponding":false,"name":"Gabriela Morgenshtern"},{"affiliations":["Roche Diagnostics International, Rotkreuz, Switzerland"],"email":"a.hirschhuesler@gmail.com","is_corresponding":false,"name":"Amanda Hirsch-H\u00fcsler"},{"affiliations":["University of Zurich, Zurich, Switzerland"],"email":"bernard@ifi.uzh.ch","is_corresponding":false,"name":"J\u00fcrgen Bernard"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1865","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"DaedalusData: Exploration, Knowledge Externalization and Labeling of Particles in Medical Manufacturing - A Design Study","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1866","abstract":"Feature grid Scene Representation Networks (SRNs) have been applied to scientific data as compact functional surrogates for analysis and visualization. 
As SRNs are black-box lossy data representations, assessing the prediction quality is critical for scientific visualization applications to ensure that scientists can trust the information being visualized. Currently, existing architectures do not support inference time reconstruction quality assessment, as voxel-wise errors cannot be evaluated in the absence of ground truth data. By employing uncertain neural network architectures in feature grid SRNs, we obtain prediction variances during inference time to facilitate confidence-aware data reconstruction. Specifically, we propose a parameter-efficient multi-decoder Ensemble SRN (E-SRN) architecture consisting of a shared feature grid with multiple lightweight multi-layer perceptron decoders. E-SRN can generate a set of plausible predictions for a given input coordinate to compute the mean as the ensemble prediction and the variance as a confidence score. The voxel-wise variance can be rendered along with the data to inform the reconstruction quality, or be integrated into uncertainty-aware volume visualization algorithms. To prevent the misalignment between the quantified variance and the prediction quality, we propose a novel variance regularization loss for ensemble learning that promotes the Regularized Ensemble SRN (RE-SRN) to obtain a more reliable variance that correlates closely to the true model error. We comprehensively evaluate the quality of variance quantification and data reconstruction of Monte Carlo Dropout (MCD), Mean Field Variational Inference (MFVI), Deep Ensemble (DE), and Predicting Variance (PV) in comparison with our proposed E-SRN and RE-SRN applied to state-of-the-art feature grid SRNs across diverse scalar field datasets. We demonstrate that RE-SRN attains the most accurate data reconstruction and competitive variance-error correlation among uncertain SRNs under the same neural network parameter budgets. Furthermore, we present an adaptation of uncertainty-aware volume rendering and shed light on the potential of incorporating uncertain predictions in improving the quality of volume rendering for uncertain SRNs. 
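The ensemble mechanism described in this abstract (a shared feature grid with multiple lightweight MLP decoders, mean as prediction and variance as confidence) is compact enough to sketch. Below is a minimal illustrative PyTorch sketch, not the authors' implementation; all names and sizes (EnsembleSRN, grid_res, hidden, the number of decoders) are assumptions for illustration.

```python
# Minimal sketch (ours, not the paper's code) of a multi-decoder ensemble SRN:
# a shared feature grid feeds several lightweight MLP decoders; the ensemble
# mean is the prediction and the ensemble variance the confidence score.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EnsembleSRN(nn.Module):
    def __init__(self, grid_res=32, feat_dim=16, n_decoders=4, hidden=32):
        super().__init__()
        # Shared learnable 3D feature grid.
        self.grid = nn.Parameter(
            0.01 * torch.randn(feat_dim, grid_res, grid_res, grid_res))
        # Multiple lightweight multi-layer perceptron decoders.
        self.decoders = nn.ModuleList([
            nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU(),
                          nn.Linear(hidden, 1))
            for _ in range(n_decoders)])

    def forward(self, coords):  # coords: (N, 3) in [-1, 1]
        # Trilinearly interpolate grid features at the query coordinates.
        feats = F.grid_sample(self.grid.unsqueeze(0),
                              coords.view(1, -1, 1, 1, 3),
                              align_corners=True)          # (1, C, N, 1, 1)
        feats = feats.view(self.grid.shape[0], -1).t()     # (N, C)
        preds = torch.stack([d(feats) for d in self.decoders])  # (K, N, 1)
        # Ensemble prediction and voxel-wise variance (confidence).
        return preds.mean(dim=0), preds.var(dim=0)
```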
Through ablation studies on the regularization strength and ensemble size, we show that E-SRN and RE-SRN are expected to perform sufficiently well with a default configuration without requiring customized hyperparameter settings for different datasets.","authors":[{"affiliations":["The Ohio State University, Columbus, United States"],"email":"xiong.336@osu.edu","is_corresponding":true,"name":"Tianyu Xiong"},{"affiliations":["The Ohio State University, Columbus, United States"],"email":"wurster.18@osu.edu","is_corresponding":false,"name":"Skylar Wolfgang Wurster"},{"affiliations":["The Ohio State University, Columbus, United States","Argonne National Laboratory, Lemont, United States"],"email":"guo.2154@osu.edu","is_corresponding":false,"name":"Hanqi Guo"},{"affiliations":["Argonne National Laboratory, Lemont, United States"],"email":"tpeterka@mcs.anl.gov","is_corresponding":false,"name":"Tom Peterka"},{"affiliations":["The Ohio State University , Columbus , United States"],"email":"hwshen@cse.ohio-state.edu","is_corresponding":false,"name":"Han-Wei Shen"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1866","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Regularized Multi-Decoder Ensemble for an Error-Aware Scene Representation Network","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1874","abstract":"A layered network is an important category of graph in which every node is assigned to a layer and layers are drawn as parallel or radial lines. They are commonly used to display temporal data or hierarchical networks. Previous research has demonstrated that minimizing edge crossings is the most important criterion to consider when looking to improve the readability of such networks. While heuristic approaches exist for crossing minimization, we are interested in optimal approaches to the problem that prioritize human readability over computational scalability. We aim to improve the usefulness and applicability of such optimal methods by understanding and improving their scalability to larger graphs. This paper categorizes and evaluates the state-of-the-art linear programming formulations for exact crossing minimization and describes nine new and existing techniques that could plausibly accelerate the optimization algorithm. Through a computational evaluation, we explore each technique's effect on calculation time and how the techniques assist or inhibit one another, allowing researchers and practitioners to adapt them to the characteristics of their networks. Our best-performing techniques yielded a median improvement of 2.5--17x depending on the solver used, giving us the capability to create optimal layouts faster and for larger networks. We provide an open-source implementation of our methodology in Python, where users can pick which combination of techniques to enable according to their use case. 
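For readers unfamiliar with the objective these linear programming formulations optimize exactly, a small illustration may help: two edges between adjacent layers cross precisely when their endpoints appear in opposite relative orders. The sketch below is our illustration of that counting rule, not the paper's open-source implementation.

```python
# Count edge crossings between two adjacent layers of a layered drawing,
# given a vertical ordering of each layer. Two edges cross iff their
# endpoints are oppositely ordered.
from itertools import combinations

def count_crossings(edges, order_top, order_bottom):
    """edges: list of (u, v) with u in the top layer, v in the bottom layer.
    order_*: dict mapping node -> position within its layer."""
    crossings = 0
    for (u1, v1), (u2, v2) in combinations(edges, 2):
        if (order_top[u1] - order_top[u2]) * (order_bottom[v1] - order_bottom[v2]) < 0:
            crossings += 1
    return crossings

# Example: a 2x2 "X" pattern has exactly one crossing.
print(count_crossings([("a", "y"), ("b", "x")],
                      {"a": 0, "b": 1}, {"x": 0, "y": 1}))  # -> 1
```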
A free copy of this paper and all supplemental materials, datasets used, and source code are available at {https://osf.io/}.","authors":[{"affiliations":["Northeastern University, Boston, United States"],"email":"wilson.conn@northeastern.edu","is_corresponding":true,"name":"Connor Wilson"},{"affiliations":["Northeastern University, Boston, United States"],"email":"eduardopuertac@gmail.com","is_corresponding":false,"name":"Eduardo Puerta"},{"affiliations":["Northeastern University, Boston, United States"],"email":"turokhunter@gmail.com","is_corresponding":false,"name":"Tarik Crnovrsanin"},{"affiliations":["University of Konstanz, Konstanz, Germany","Northeastern University, Boston, United States"],"email":"sara.di-bartolomeo@uni-konstanz.de","is_corresponding":false,"name":"Sara Di Bartolomeo"},{"affiliations":["Northeastern University, Boston, United States"],"email":"c.dunne@northeastern.edu","is_corresponding":false,"name":"Cody Dunne"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1874","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Evaluating and extending speedup techniques for optimal crossing minimization in layered graph drawings","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1880","abstract":"Merge trees are a valuable tool in scientific visualization of scalar fields; however, current methods for merge tree comparisons are computationally expensive, primarily due to the exhaustive matching between tree nodes. To address this challenge, we introduce the merge tree neural networks (MTNN), a learned neural network model designed for merge tree comparison. The MTNN enables rapid and high-quality similarity computation. We first demonstrate how graph neural networks (GNNs), which have emerged as an effective encoder for graphs, can be trained to produce embeddings of merge trees in vector spaces that enable efficient similarity comparison. Next, we formulate the novel MTNN model that further improves the similarity comparisons by integrating the tree and node embeddings with a new topological attention mechanism. We demonstrate the effectiveness of our model on real-world data in different domains and examine our model's generalizability across various datasets. Our experimental analysis demonstrates our approach's superiority in accuracy and efficiency.
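A schematic of why learned embeddings make comparison rapid: once a trained encoder maps each merge tree to a fixed-length vector, similarity queries reduce to cheap distance computations instead of exhaustive node matching. The `encoder` below is a hypothetical stand-in for a trained MTNN-style model; this is our illustration of the comparison step, not the paper's architecture.

```python
# Compare merge trees via learned fixed-length embeddings (hypothetical
# encoder): small vector distance stands in for expensive tree matching.
import numpy as np

def embedding_distance(encoder, tree_a, tree_b):
    za, zb = encoder(tree_a), encoder(tree_b)   # (d,) vectors
    return np.linalg.norm(za - zb)              # small distance = similar trees

def nearest_trees(encoder, query, corpus, k=5):
    z = np.stack([encoder(t) for t in corpus])  # precomputable offline
    d = np.linalg.norm(z - encoder(query), axis=1)
    return np.argsort(d)[:k]                    # indices of the k most similar
```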
In particular, we speed up the prior state-of-the-art by more than 100x on the benchmark datasets while maintaining an error rate below 0.1%.","authors":[{"affiliations":["Tulane University, New Orleans, United States"],"email":"yqin2@tulane.edu","is_corresponding":true,"name":"Yu Qin"},{"affiliations":["Montana State University, Bozeman, United States"],"email":"brittany.fasy@montana.edu","is_corresponding":false,"name":"Brittany Terese Fasy"},{"affiliations":["Tulane University, New Orleans, United States"],"email":"cwenk@tulane.edu","is_corresponding":false,"name":"Carola Wenk"},{"affiliations":["Tulane University, New Orleans, United States"],"email":"bsumma@tulane.edu","is_corresponding":false,"name":"Brian Summa"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1880","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Rapid and Precise Topological Comparison with Merge Tree Neural Networks","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1917","abstract":"The importance of data charts is self-evident, given their ability to express complex data in a simple format that facilitates quick and easy comparisons, analysis, and consumption. However, the inherent visual nature of the charts creates barriers for people with visual impairments to reap the associated benefits to the same extent as their sighted peers. While extant research has predominantly focused on understanding and addressing these barriers for blind screen reader users, the needs of low-vision screen magnifier users have been largely overlooked. In an interview study, almost all low-vision participants stated that it was challenging to interact with data charts on small screen devices such as smartphones and tablets, even though they could technically \u201csee\u201d the chart content. They ascribed these challenges mainly to the magnification-induced loss of visual context that connected data points with each other and also with chart annotations, e.g., axis values. In this paper, we present a method that addresses this problem by automatically transforming charts that are typically non-interactive images into personalizable interactive charts which allow selective viewing of desired data points and preserve visual context as much as possible under screen enlargement. We evaluated our method in a usability study with 26 low-vision participants, who all performed a set of representative chart-related tasks under different study conditions. 
In the study, we observed that our method significantly improved the usability of charts over both the status quo screen magnifier and a state-of-the-art space compaction-based solution.","authors":[{"affiliations":["Old Dominion University, Norfolk, United States"],"email":"yprak001@odu.edu","is_corresponding":true,"name":"Yash Prakash"},{"affiliations":["Old Dominion University, Norfolk, United States"],"email":"pkhan002@odu.edu","is_corresponding":false,"name":"Pathan Aseef Khan"},{"affiliations":["Old Dominion University, Norfolk, United States"],"email":"anaya001@odu.edu","is_corresponding":false,"name":"Akshay Kolgar Nayak"},{"affiliations":["Old Dominion University, Norfolk, United States"],"email":"uksjayarathna@gmail.com","is_corresponding":false,"name":"Sampath Jayarathna"},{"affiliations":["Michigan State University, East Lansing, United States"],"email":"leehaena@msu.edu","is_corresponding":false,"name":"Hae-Na Lee"},{"affiliations":["Old Dominion University, Norfolk, United States"],"email":"vganjigu@odu.edu","is_corresponding":false,"name":"Vikas Ashok"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1917","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Towards Enhancing Low Vision Usability of Data Charts on Smartphones","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20233299602","abstract":"Data transformation is an essential step in data science. While experts primarily use programming to transform their data, there is an increasing need to support non-programmers with user interface-based tools. With the rapid development in interaction techniques and computing environments, we report our empirical findings about the effects of interaction techniques and environments on performing data transformation tasks. Specifically, we studied the potential benefits of direct interaction and virtual reality (VR) for data transformation. We compared gesture interaction versus a standard WIMP user interface, each on the desktop and in VR. With the tested data and tasks, we found time performance was similar between desktop and VR. Meanwhile, VR shows preliminary evidence of better supporting provenance and sense-making throughout the data transformation process.
Our exploration of performing data transformation in VR also provides initial support for enabling an iterative and fully immersive data science workflow.","authors":[],"award":"","doi":"10.1109/TVCG.2023.3299602","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20233299602","image_caption":"","keywords":["Immersive Analytics, Data Transformation, Data Science, Interaction, Empirical Study, Virtual/Augmented/Mixed Reality"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"This is the Table I Want! Interactive Data Transformation on Desktop and in Virtual Reality","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20233310019","abstract":"The dynamic network visualization design space consists of two major dimensions: network structural and temporal representation. As more techniques are developed and published, a clear need for evaluation and experimental comparisons between them emerges. Most studies explore the temporal dimension and diverse interaction techniques supporting the participants, focusing on a single structural representation. Empirical evidence about performance and preference for different visualization approaches is scattered over different studies, experimental settings, and tasks. This paper aims to comprehensively investigate the dynamic network visualization design space in two evaluations. First, a controlled study assesses participants' response times, accuracy, and preferences for different combinations of network structural and temporal representations on typical dynamic network exploration tasks, with and without the support of standard interaction methods. Second, the best-performing combinations from the first study are enhanced based on participants' feedback and evaluated in a heuristic-based qualitative study with visualization experts on a real-world network. Our results highlight node-link with animation and playback controls as the best-performing combination and the most preferred based on ratings. Matrices achieve similar performance to node-link in the first study but have considerably lower scores in our second evaluation.
Similarly, juxtaposition exhibits evident scalability issues in more realistic analysis contexts.","authors":[],"award":"","doi":"10.1109/TVCG.2023.3310019","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20233310019","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"On Network Structural and Temporal Encodings: A Space and Time Odyssey","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20233302308","abstract":"We visualize the predictions of multiple machine learning models to help biologists as they interactively make decisions about cell lineage---the development of a (plant) embryo from a single ovum cell. Traditionally, biologists manually constructed the cell lineage based on a confocal microscopy dataset, starting from this observation and reasoning backward in time to establish cell inheritance. To speed up this tedious process, we make use of machine learning (ML) models trained on a database of manually established cell lineages to assist the biologist in cell assignment. Most biologists, however, are not familiar with ML, nor is it clear to them which model best predicts the embryo's development. We thus have developed a visualization system that is designed to support biologists in exploring and comparing ML models, checking the model predictions, detecting possible ML model mistakes, and deciding on the most likely embryo development. To evaluate our proposed system, we deployed our interface with six biologists in an observational study.
Our results show that the visual representations of machine learning are easily understandable, and our tool, LineageD+, could potentially increase biologists' working efficiency and enhance the understanding of embryos.","authors":[],"award":"","doi":"10.1109/TVCG.2023.3302308","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20233302308","image_caption":"","keywords":["Visualization, visual analytics, machine learning, comparing ML predictions, human-AI teaming, plant biology, cell lineage"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"Visualizing and Comparing Machine Learning Predictions to Improve Human-AI Teaming on the Example of Cell Lineage","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20233275925","abstract":"A contiguous area cartogram is a geographic map in which the area of each region is proportional to numerical data (e.g., population size) while keeping neighboring regions connected. In this study, we investigated whether value-to-area legends (square symbols next to the values represented by the squares' areas) and grid lines aid map readers in making better area judgments. We conducted an experiment to determine the accuracy, speed, and confidence with which readers infer numerical data values for the mapped regions. We found that, when only informed about the total numerical value represented by the whole cartogram without any legend, the distribution of estimates for individual regions was centered near the true value with substantial spread. Legends with grid lines significantly reduced the spread but led to a tendency to underestimate the values. Comparing differences between regions or between cartograms revealed that legends and grid lines slowed the estimation without improving accuracy. However, participants were more likely to complete the tasks when legends and grid lines were present, particularly when the area units represented by these features could be interactively selected. 
We recommend considering the cartogram's use case and purpose before deciding whether to include grid lines or an interactive legend.","authors":[],"award":"","doi":"10.1109/TVCG.2023.3275925","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20233275925","image_caption":"","keywords":["Task Analysis, Symbols, Data Visualization, Sociology, Visualization, Switches, Mice, Cartogram, Geovisualization, Interactive Data Exploration, Quantitative Evaluation"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"Effectiveness of Area-to-Value Legends and Grid Lines in Contiguous Area Cartograms","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20233289292","abstract":"Reading a visualization is like reading a paragraph. Each sentence is a comparison: the mean of these is higher than those; this difference is smaller than that. What determines which comparisons are made first? The viewer's goals and expertise matter, but the way that values are visually grouped together within the chart also impacts those comparisons. Research from psychology suggests that comparisons involve multiple steps. First, the viewer divides the visualization into a set of units. This might include a single bar or a grouped set of bars. Then the viewer selects and compares two of these units, perhaps noting that one pair of bars is longer than another. Viewers might take an additional third step and perform a second-order comparison, perhaps determining that the difference between one pair of bars is greater than the difference between another pair. We create a visual comparison taxonomy that allows us to develop and test a sequence of hypotheses about which comparisons people are more likely to make when reading a visualization. We find that people tend to compare two groups before comparing two individual bars and that second-order comparisons are rare. Visual cues like spatial proximity and color can influence which elements are grouped together and selected for comparison, with spatial proximity being a stronger grouping cue. 
Interestingly, once viewers group together and compare a set of bars, regardless of whether the group is formed by spatial proximity or color similarity, they no longer consider other possible groupings in their comparisons.","authors":[],"award":"","doi":"10.1109/TVCG.2023.3289292","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20233289292","image_caption":"","keywords":["comparison, perception, visual grouping, bar charts, verbal conclusions."],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"What Does the Chart Say? Grouping Cues Guide Viewer Comparisons and Conclusions in Bar Charts","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20233316469","abstract":"Automated visualization recommendation facilitates the rapid creation of effective visualizations, which is especially beneficial for users with limited time and limited knowledge of data visualization. There is an increasing trend in leveraging machine learning (ML) techniques to achieve an end-to-end visualization recommendation. However, existing ML-based approaches implicitly assume that there is only one appropriate visualization for a specific dataset, which is often not true for real applications. Also, they often work like a black box, making it difficult for users to understand the reasons for recommending specific visualizations. To fill the research gap, we propose AdaVis, an adaptive and explainable approach to recommend one or multiple appropriate visualizations for a tabular dataset. It leverages a box embedding-based knowledge graph to effectively model the possible one-to-many mapping relations among different entities (i.e., data features, dataset columns, datasets, and visualization choices). The embeddings of the entities and relations can be learned from dataset-visualization pairs. Also, AdaVis incorporates the attention mechanism into the inference framework. Attention can indicate the relative importance of data features for a dataset and provide fine-grained explainability.
Our extensive evaluations, spanning quantitative metrics, case studies, and user interviews, demonstrate the effectiveness of AdaVis.","authors":[],"award":"","doi":"10.1109/TVCG.2023.3316469","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20233316469","image_caption":"","keywords":["Visualization Recommendation, Logical Reasoning, Data Visualization, Knowledge Graph"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"AdaVis: Adaptive and Explainable Visualization Recommendation for Tabular Data","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20233322372","abstract":"Visualization linting is a proven effective tool in assisting users to follow established visualization guidelines. Despite its success, visualization linting for choropleth maps, one of the most popular visualizations on the internet, has yet to be investigated. In this paper, we present GeoLinter, a linting framework for choropleth maps that assists in creating accurate and robust maps. Based on a set of design guidelines and metrics drawing upon a collection of best practices from the cartographic literature, GeoLinter detects potentially suboptimal design decisions and provides further recommendations on design improvement with explanations at each step of the design process. We perform a validation study to evaluate the proposed framework's functionality with respect to identifying and fixing errors and apply its results to improve the robustness of GeoLinter. Finally, we demonstrate the effectiveness of GeoLinter, validated through empirical studies, by applying it to a series of case studies using real-world datasets.","authors":[],"award":"","doi":"10.1109/TVCG.2023.3322372","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20233322372","image_caption":"","keywords":["Data visualization , Image color analysis , Geology , Recommender systems , Guidelines , Bars , Visualization Author Keywords: Automated visualization design , choropleth maps , visualization linting , visualization recommendation"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"GeoLinter: A Linting Framework for Choropleth Maps","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20233326698","abstract":"Researchers have derived many theoretical models for specifying users\u2019 insights as they interact with a visualization system.
These representations are essential for understanding the insight discovery process, such as when inferring user interaction patterns that lead to insight or assessing the rigor of reported insights. However, theoretical models can be difficult to apply to existing tools and user studies, often due to discrepancies in how insight and its constituent parts are defined. This paper calls attention to the consistent structures that recur across the visualization literature and describes how they connect multiple theoretical representations of insight. We synthesize a unified formalism for insights using these structures, enabling a wider audience of researchers and developers to adopt the corresponding models. Through a series of theoretical case studies, we use our formalism to compare and contrast existing theories, revealing interesting research challenges in reasoning about a user's domain knowledge and leveraging synergistic approaches in data mining and data management research.","authors":[],"award":"","doi":"10.1109/TVCG.2023.3326698","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20233326698","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"What Do We Mean When We Say \u201cInsight\u201d? A Formal Synthesis of Existing Theory","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20233330262","abstract":"This paper presents a computational framework for the concise encoding of an ensemble of persistence diagrams, in the form of weighted Wasserstein barycenters [100], [102] of a dictionary of atom diagrams. We introduce a multi-scale gradient descent approach for the efficient resolution of the corresponding minimization problem, which interleaves the optimization of the barycenter weights with the optimization of the atom diagrams. Our approach leverages the analytic expressions for the gradient of both sub-problems to ensure fast iterations and it additionally exploits shared-memory parallelism. Extensive experiments on public ensembles demonstrate the efficiency of our approach, with Wasserstein dictionary computations on the order of minutes for the largest examples. We show the utility of our contributions in two applications. First, we apply Wasserstein dictionaries to data reduction and reliably compress persistence diagrams by concisely representing them with their weights in the dictionary. Second, we present a dimensionality reduction framework based on a Wasserstein dictionary defined with a small number of atoms (typically three) and encode the dictionary as a low dimensional simplex embedded in a visual space (typically in 2D). In both applications, quantitative experiments assess the relevance of our framework.
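The dimensionality-reduction idea from the abstract above is easy to illustrate: with a three-atom dictionary, each diagram is summarized by barycentric weights on a 2-simplex, which map directly to a point inside a triangle in 2D visual space. The sketch below is our illustration under that reading, not the paper's C++ implementation; the triangle coordinates and normalization are assumptions.

```python
# Map three-atom dictionary weights (barycentric coordinates) to a point
# inside a display triangle: a convex combination of the triangle vertices.
import numpy as np

# Vertices of the display triangle, one per dictionary atom (assumed layout).
TRIANGLE = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])

def simplex_to_2d(weights):
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()            # enforce barycentric coordinates
    return w @ TRIANGLE        # convex combination of triangle vertices

print(simplex_to_2d([1, 0, 0]))        # an atom itself sits on a vertex
print(simplex_to_2d([1/3, 1/3, 1/3]))  # an even mixture sits at the centroid
```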
Finally, we provide a C++ implementation that can be used to reproduce our results.","authors":[],"award":"","doi":"10.1109/TVCG.2023.3330262","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20233330262","image_caption":"","keywords":["Topological data analysis, ensemble data, persistence diagrams"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"Wasserstein Dictionaries of Persistence Diagrams","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20233332511","abstract":"We present Submerse, an end-to-end framework for visualizing flooding scenarios on large and immersive display ecologies. Specifically, we reconstruct a surface mesh from input flood simulation data and generate a to-scale 3D virtual scene by incorporating geographical data such as terrain, textures, buildings, and additional scene objects. To optimize computation and memory performance for large simulation datasets, we discretize the data on an adaptive grid using dynamic quadtrees and support level-of-detail based rendering. Moreover, to provide a perception of flooding direction for a time instance, we animate the surface mesh by synthesizing water waves. As interaction is key for effective decision-making and analysis, we introduce two novel techniques for flood visualization in immersive systems: (1) an automatic scene-navigation method using optimal camera viewpoints generated for marked points-of-interest based on the display layout, and (2) an AR-based focus+context technique using an aux display system. Submerse is developed in collaboration between computer scientists and atmospheric scientists. 
We evaluate the effectiveness of our system and application by conducting workshops with emergency managers, domain experts, and concerned stakeholders in the Stony Brook Reality Deck, an immersive gigapixel facility, to visualize a superstorm flooding scenario in New York City.","authors":[],"award":"","doi":"10.1109/TVCG.2023.3332511","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20233332511","image_caption":"","keywords":["Camera navigation, flooding simulation visualization, immersive visualization, mixed reality"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"Submerse: Visualizing Storm Surge Flooding Simulations in Immersive Display Ecologies","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20233337173","abstract":"Visualization design studies bring together visualization researchers and domain experts to address yet unsolved data analysis challenges stemming from the needs of the domain experts. Typically, the visualization researchers lead the design study process and implementation of any visualization solutions. This setup leverages the visualization researchers' knowledge of methodology, design, and programming, but the availability to synchronize with the domain experts can hamper the design process. We consider an alternative setup where the domain experts take the lead in the design study, supported by the visualization experts. In this study, the domain experts are computer architecture experts who simulate and analyze novel computer chip designs. These chips rely on a Network-on-Chip (NOC) to connect components. The experts want to understand how the chip designs perform and what in the design led to their performance. To aid this analysis, we develop Vis4Mesh, a visualization system that provides spatial, temporal, and architectural context to simulated NOC behavior. Integration with an existing computer architecture visualization tool enables architects to perform deep-dives into specific architecture component behavior. We validate Vis4Mesh through a case study and a user study with computer architecture researchers. 
We reflect on our design and process, discussing advantages, disadvantages, and guidance for engaging in domain expert-led design studies.","authors":[],"award":"","doi":"10.1109/TVCG.2023.3337173","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20233337173","image_caption":"","keywords":["Data Visualization, Design Study, Network-on-Chip, Performance Analysis"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"Visual Exploratory Analysis for Designing Large-Scale Network-on-Chip Architectures: A Domain Expert-Led Design Study","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20233322898","abstract":"Visual and interactive machine learning systems (IML) are becoming ubiquitous as they empower individuals with varied machine learning expertise to analyze data. However, it remains complex to align interactions with visual marks to a user\u2019s intent for steering machine learning models. We explore using data and visual design probes to elicit users\u2019 desired interactions to steer ML models via visual encodings within IML interfaces. We conducted an elicitation study with 20 data analysts with varying expertise in ML. We summarize our findings as target-interaction pairs, which we compare to prior systems to assess the utility of the probes. We additionally surfaced insights about factors influencing how and why participants chose to interact with visual encodings, including refraining from interacting. Finally, we reflect on the value of gathering such formative empirical evidence via data and visual design probes ahead of developing IML prototypes. ","authors":[],"award":"","doi":"10.1109/TVCG.2023.3322898","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20233322898","image_caption":"","keywords":["Design Probes, Interactive Machine Learning, Model Steering, Semantic Interaction"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"Eliciting Model Steering Interactions from Users via Data and Visual Design Probes","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20233338451","abstract":"This paper investigates the role of text in visualizations, specifically the impact of text position, semantic content, and biased wording. Two empirical studies were conducted based on two tasks (predicting data trends and appraising bias) using two visualization types (bar and line charts).
While the addition of text had a minimal effect on how people perceived data trends, there was a significant impact on how biased they perceived the authors to be. This finding revealed a relationship between the degree of bias in textual information and the perception of the authors' bias. Exploratory analyses support an interaction between a person's prediction and the degree of bias they perceived. This paper also develops a crowdsourced method for creating chart annotations that range from neutral to highly biased. This research highlights the need for designers to mitigate potential polarization of readers' opinions based on how authors' ideas are expressed.","authors":[],"award":"","doi":"10.1109/TVCG.2023.3338451","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20233338451","image_caption":"","keywords":["Visualization, text, annotation, perceived bias, judgment, prediction"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"The Role of Text in Visualizations: How Annotations Shape Perceptions of Bias and Influence Predictions","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20233337642","abstract":"Molecular docking is a key technique in various fields like structural biology, medicinal chemistry, and biotechnology. It is widely used for virtual screening during drug discovery, computer-assisted drug design, and protein engineering. A general molecular docking process consists of the target and ligand selection, their preparation, and the docking process itself, followed by the evaluation of the results. However, the most commonly used docking software provides no or only very basic evaluation possibilities. Scripting and external molecular viewers are often used, which are not designed for an efficient analysis of docking results. Therefore, we developed InVADo, a comprehensive interactive visual analysis tool for large docking data. It consists of multiple linked 2D and 3D views. It filters and spatially clusters the data, and enriches it with post-docking analysis results of protein-ligand interactions and functional groups, to enable well-founded decision-making. In an exemplary case study, domain experts confirmed that InVADo facilitates and accelerates the analysis workflow.
They rated it as a convenient, comprehensive, and feature-rich tool, especially useful for virtual screening.","authors":[],"award":"","doi":"10.1109/TVCG.2023.3337642","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20233337642","image_caption":"","keywords":["Molecular Docking, AutoDock, Virtual Screening, Visual Analysis, Visualization, Clustering, Protein-Ligand Interaction."],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"InVADo: Interactive Visual Analysis of Molecular Docking Data","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20233345373","abstract":"Traditional deep learning algorithms assume that all data is available during training, which presents challenges when handling large-scale time-varying data. To address this issue, we propose a data reduction pipeline called knowledge distillation-based implicit neural representation (KD-INR) for compressing large-scale time-varying data. The approach consists of two stages: spatial compression and model aggregation. In the first stage, each time step is compressed using an implicit neural representation with bottleneck layers and features of interest preservation-based sampling. In the second stage, we utilize an offline knowledge distillation algorithm to extract knowledge from the trained models and aggregate it into a single model. We evaluated our approach on a variety of time-varying volumetric data sets. 
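The second (model aggregation) stage of KD-INR described above admits a compact sketch: a single student network conditioned on space and time is trained offline to reproduce the outputs of the frozen per-timestep teacher models. The code below is a hedged illustration of that distillation loop; the sampling scheme, conditioning, and all names are our assumptions, not the authors' implementation.

```python
# Offline knowledge distillation sketch: the student regresses the frozen
# per-timestep teachers' outputs at randomly sampled coordinates.
import torch

def distill(student, teachers, n_steps=1000, batch=4096, device="cpu"):
    opt = torch.optim.Adam(student.parameters(), lr=1e-3)
    for step in range(n_steps):
        t = torch.randint(len(teachers), (1,)).item()      # pick a timestep
        coords = torch.rand(batch, 3, device=device) * 2 - 1
        with torch.no_grad():                               # teachers frozen
            target = teachers[t](coords)
        t_feat = torch.full((batch, 1), float(t), device=device)
        pred = student(torch.cat([coords, t_feat], dim=1))  # (x, y, z, t) in
        loss = torch.nn.functional.mse_loss(pred, target)
        opt.zero_grad()
        loss.backward()
        opt.step()
```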
Both quantitative and qualitative results, such as PSNR, LPIPS, and rendered images, demonstrate that KD-INR surpasses the state-of-the-art approaches, including learning-based (i.e., CoordNet, NeurComp, and SIREN) and lossy compression (i.e., SZ3, ZFP, and TTHRESH) methods, at various compression ratios ranging from hundreds to ten thousand.","authors":[],"award":"","doi":"10.1109/TVCG.2023.3345373","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20233345373","image_caption":"","keywords":["Time-varying data compression, implicit neural representation, knowledge distillation, volume visualization."],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"KD-INR: Time-Varying Volumetric Data Compression via Knowledge Distillation-based Implicit Neural Representation","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20233332999","abstract":"Quantum computing offers significant speedup compared to classical computing, which has led to a growing interest among users in learning and applying quantum computing across various applications. However, quantum circuits, which are fundamental for implementing quantum algorithms, can be challenging for users to understand due to their underlying logic, such as the temporal evolution of quantum states and the effect of quantum amplitudes on the probability of basis quantum states. To fill this research gap, we propose QuantumEyes, an interactive visual analytics system to enhance the interpretability of quantum circuits through both global and local levels. For the global-level analysis, we present three coupled visualizations to delineate the changes of quantum states and the underlying reasons: a Probability Summary View to overview the probability evolution of quantum states; a State Evolution View to enable an in-depth analysis of the influence of quantum gates on the quantum states; a Gate Explanation View to show the individual qubit states and facilitate a better understanding of the effect of quantum gates. For the local-level analysis, we design a novel geometrical visualization dandelion chart to explicitly reveal how the quantum amplitudes affect the probability of the quantum state. We thoroughly evaluated QuantumEyes as well as the novel dandelion chart integrated into it through two case studies on different types of quantum algorithms and in-depth expert interviews with 12 domain experts. 
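The local-level analysis in the QuantumEyes abstract above rests on a standard fact of quantum mechanics that the dandelion chart makes visible: a basis state's measurement probability is the squared magnitude of its complex amplitude. A two-line numpy illustration:

```python
# Measurement probabilities are squared magnitudes of complex amplitudes.
import numpy as np

amplitudes = np.array([1, 1j]) / np.sqrt(2)   # the state (|0> + i|1>) / sqrt(2)
print(np.abs(amplitudes) ** 2)                # -> [0.5 0.5] for |0> and |1>
```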
The results demonstrate the effectiveness and usability of our approach in enhancing the interpretability of quantum circuits.","authors":[],"award":"","doi":"10.1109/TVCG.2023.3332999","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20233332999","image_caption":"","keywords":["Data visualization, design study, interpretability, quantum computing."],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"QuantumEyes: Towards Better Interpretability of Quantum Circuits","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20233334755","abstract":"This paper presents a computational framework for the Wasserstein auto-encoding of merge trees (MT-WAE), a novel extension of the classical auto-encoder neural network architecture to the Wasserstein metric space of merge trees. In contrast to traditional auto-encoders which operate on vectorized data, our formulation explicitly manipulates merge trees on their associated metric space at each layer of the network, resulting in superior accuracy and interpretability. Our novel neural network approach can be interpreted as a non-linear generalization of previous linear attempts [79] at merge tree encoding. It also trivially extends to persistence diagrams. Extensive experiments on public ensembles demonstrate the efficiency of our algorithms, with MT-WAE computations on the order of minutes on average. We show the utility of our contributions in two applications adapted from previous work on merge tree encoding [79]. First, we apply MT-WAE to merge tree compression, by concisely representing them with their coordinates in the final layer of our auto-encoder. Second, we document an application to dimensionality reduction, by exploiting the latent space of our auto-encoder, for the visual analysis of ensemble data. We illustrate the versatility of our framework by introducing two penalty terms, to help preserve in the latent space both the Wasserstein distances between merge trees, as well as their clusters. In both applications, quantitative experiments assess the relevance of our framework.
Finally, we provide a C++ implementation that can be used for reproducibility.","authors":[],"award":"","doi":"10.1109/TVCG.2023.3334755","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20233334755","image_caption":"","keywords":["Topological data analysis, ensemble data, persistence diagrams, merge trees, auto-encoders, neural networks"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"Wasserstein Auto-Encoders of Merge Trees (and Persistence Diagrams)","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20233340770","abstract":"We present VoxAR, a method to facilitate an effective visualization of volume-rendered objects in optical see-through head-mounted displays (OST-HMDs). The potential of augmented reality (AR) to integrate digital information into the physical world provides new opportunities for visualizing and interpreting scientific data. However, a limitation of OST-HMD technology is that rendered pixels of a virtual object can interfere with the colors of the real-world, making it challenging to perceive the augmented virtual information accurately. We address this challenge in a two-step approach. First, VoxAR determines an appropriate placement of the volume-rendered object in the real-world scene by evaluating a set of spatial and environmental objectives, managed as user-selected preferences and pre-defined constraints. We achieve a real-time solution by implementing the objectives using a GPU shader language. Next, VoxAR adjusts the colors of the input transfer function (TF) based on the real-world placement region. Specifically, we introduce a novel optimization method that adjusts the TF colors such that the resulting volume-rendered pixels are discernible against the background and the TF maintains the perceptual mapping between the colors and data intensity values. 
Finally, we present an assessment of our approach through objective evaluations and subjective user studies.","authors":[],"award":"","doi":"10.1109/TVCG.2023.3340770","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20233340770","image_caption":"","keywords":["Adaptive Visualization, Situated Visualization, Augmented Reality, Volume Rendering"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"VoxAR: Adaptive Visualization of Volume Rendered Objects in Optical See-Through Augmented Reality","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20233345340","abstract":"Label quality issues, such as noisy labels and imbalanced class distributions, have negative effects on model performance. Automatic reweighting methods identify problematic samples with label quality issues by recognizing their negative effects on validation samples and assigning lower weights to them. However, these methods fail to achieve satisfactory performance when the validation samples are of low quality. To tackle this, we develop Reweighter, a visual analysis tool for sample reweighting. The reweighting relationships between validation samples and training samples are modeled as a bipartite graph. Based on this graph, a validation sample improvement method is developed to improve the quality of validation samples. Since the automatic improvement may not always be perfect, a co-cluster-based bipartite graph visualization is developed to illustrate the reweighting relationships and support the interactive adjustments to validation samples and reweighting results. The adjustments are converted into the constraints of the validation sample improvement method to further improve validation samples. We demonstrate the effectiveness of Reweighter in improving reweighting results through quantitative evaluation and two case studies.","authors":[],"award":"","doi":"10.1109/TVCG.2023.3345340","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20233345340","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"Interactive Reweighting for Mitigating Label Quality Issues","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20233323150","abstract":"We examined user preferences to combine multiple interaction modalities for collaborative interaction with data shown on large vertical displays. 
Large vertical displays facilitate visual data exploration and allow the use of diverse interaction modalities by multiple users at different distances from the screen. Yet, how to offer multiple interaction modalities is a non-trivial problem. We conducted an elicitation study with 20 participants that generated 1015 interaction proposals combining touch, speech, pen, and mid-air gestures. Given the opportunity to interact using these four modalities, participants preferred speech interaction in 10 of 15 low-level tasks and direct manipulation for straightforward tasks such as showing a tooltip or selecting. In contrast to previous work, participants most favored unimodal and personal interactions. We identified what we call collaborative synonyms among their interaction proposals and found that pairs of users collaborated either unimodally and simultaneously or multimodally and sequentially. We provide insights into how end-users associate visual exploration tasks with certain modalities and how they collaborate at different interaction distances using specific interaction modalities. The supplemental material is available at https://osf.io/m8zuh.","authors":[],"award":"","doi":"10.1109/TVCG.2023.3323150","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20233323150","image_caption":"","keywords":["Multimodal interaction, collaborative work, large vertical displays, elicitation study, spatio-temporal data"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"Eliciting Multimodal and Collaborative Interactions for Data Exploration on Large Vertical Displays","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20233334513","abstract":"Data integration is often performed to consolidate information from multiple disparate data sources during visual data analysis. However, integration operations are usually separate from visual analytics operations such as encode and filter in both interface design and empirical research. We conducted a preliminary user study to investigate whether and how data integration should be incorporated directly into the visual analytics process. We used two interface alternatives featuring contrasting approaches to the data preparation and analysis workflow: manual file-based ex-situ integration as a separate step from visual analytics operations; and automatic UI-based in-situ integration merged with visual analytics operations. Participants were asked to complete specific and free-form tasks with each interface, browsing for patterns, generating insights, and summarizing relationships between attributes distributed across multiple files. Analyzing participants' interactions and feedback, we found both task completion time and total interactions to be similar across interfaces and tasks, as well as unique integration strategies between interfaces and emergent behaviors related to satisficing and cognitive bias. 
Participants' time spent and interactions revealed that in-situ integration enabled users to spend more time on analysis tasks compared with ex-situ integration. Participants' integration strategies and analytical behaviors revealed differences in interface usage for generating and tracking hypotheses and insights. With these results, we synthesized preliminary guidelines for designing future visual analytics interfaces that can support integrating attributes throughout an active analysis process.","authors":[],"award":"","doi":"10.1109/TVCG.2023.3334513","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20233334513","image_caption":"","keywords":["Visual analytics, Data integration, User interface design, Integration strategies, Analytical behaviors."],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"Preliminary Guidelines For Combining Data Integration and Visual Data Analysis","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20233341990","abstract":"We report on challenges and considerations for supporting design processes for visualizations in motion embedded in sports videos. We derive our insights from analyzing swimming race visualizations and motion-related data, building a technology probe, and conducting a study with designers. Understanding how to design situated visualizations in motion is important for a variety of contexts. Competitive sports coverage, in particular, increasingly includes information on athlete or team statistics and records. Although moving visual representations attached to athletes or other targets are starting to appear, systematic investigations on how to best support their design process in the context of sports videos are still missing. Our work makes several contributions in identifying opportunities for visualizations to be added to swimming competition coverage but, most importantly, in identifying requirements and challenges for designing situated visualizations in motion. 
Our investigations include the analysis of a survey with swimming enthusiasts on their motion-related information needs, an ideation workshop to collect designs and elicit design challenges, the design of a technology probe that allows designers to create embedded visualizations in motion based on real data, and an evaluation with visualization designers that aimed to understand the benefits of designing directly on videos.","authors":[],"award":"","doi":"10.1109/TVCG.2023.3341990","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20233341990","image_caption":"","keywords":["Data visualization, Sports, Videos, Probes, Surveys, Authoring systems, Games, Design framework, Embedded visualization, Sports analytics, Visualization in motion"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"Designing for Visualization in Motion: Embedding Visualizations in Swimming Videos","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20233346713","abstract":"Recent growth in the popularity of large language models has led to their increased usage for summarizing, predicting, and generating text, making it vital to help researchers and engineers understand how and why they work. We present KnowledgeVIS, a human-in-the-loop visual analytics system for interpreting language models using fill-in-the-blank sentences as prompts. By comparing predictions between sentences, KnowledgeVIS reveals learned associations that intuitively connect what language models learn during training to natural language tasks downstream, helping users create and test multiple prompt variations, analyze predicted words using a novel semantic clustering technique, and discover insights using interactive visualizations. Collectively, these visualizations help users identify the likelihood and uniqueness of individual predictions, compare sets of predictions between prompts, and summarize patterns and relationships between predictions across all prompts. 
We demonstrate the capabilities of KnowledgeVIS with feedback from six NLP experts as well as three different use cases: (1) probing biomedical knowledge in two domain-adapted models; (2) evaluating harmful identity stereotypes; and (3) discovering facts and relationships between three general-purpose models.","authors":[],"award":"","doi":"10.1109/TVCG.2023.3346713","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20233346713","image_caption":"","keywords":["Visual analytics, language models, prompting, interpretability, machine learning."],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"KnowledgeVIS: Interpreting Language Models by Comparing Fill-in-the-Blank Prompts","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20243350076","abstract":"Ensembles of contours arise in various applications like simulation, computer-aided design, and semantic segmentation. Uncovering ensemble patterns and analyzing individual members is a challenging task that suffers from clutter. Ensemble statistical summarization can alleviate this issue by permitting analysis of ensembles' distributional components like the mean and median, confidence intervals, and outliers. Contour boxplots, powered by Contour Band Depth (CBD), are a popular non-parametric ensemble summarization method that benefits from CBD's generality, robustness, and theoretical properties. In this work, we introduce Inclusion Depth (ID), a new notion of contour depth with three defining characteristics. First, ID is a generalization of functional Half-Region Depth, which offers several theoretical guarantees. Second, ID relies on a simple principle: the inside/outside relationships between contours. This facilitates implementing ID and understanding its results. Third, the computational complexity of ID scales quadratically in the number of members of the ensemble, improving CBD's cubic complexity. In practice, this also speeds up the computation, enabling the use of ID for exploring large contour ensembles or in contexts requiring multiple depth evaluations like clustering. 
In a series of experiments on synthetic data and case studies with meteorological and segmentation data, we evaluate ID's performance and demonstrate its capabilities for the visual analysis of contour ensembles.","authors":[],"award":"","doi":"10.1109/TVCG.2024.3350076","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20243350076","image_caption":"","keywords":["Uncertainty visualization, contours, ensemble summarization, depth statistics."],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"Inclusion Depth for Contour Ensembles","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20243354561","abstract":"Interactive visualization can support fluid exploration but is often limited to predetermined tasks. Scripting can support a vast range of queries but may be more cumbersome for free-form exploration. Embedding interactive visualization in scripting environments, such as computational notebooks, provides an opportunity to leverage the strengths of both direct manipulation and scripting. We investigate interactive visualization design methodology, choices, and strategies under this paradigm through a design study of calling context trees used in performance analysis, a field which exemplifies typical exploratory data analysis workflows with big data and hard-to-define problems. We first produce a formal task analysis assigning tasks to graphical or scripting contexts based on their specificity, frequency, and suitability. We then design a notebook-embedded interactive visualization and validate it with intended users. In a follow-up study, we present participants with multiple graphical and scripting interaction modes to elicit feedback about notebook-embedded visualization design, finding consensus in support of the interaction model. 
We report and reflect on observations regarding the process and design implications for combining visualization and scripting in notebooks.","authors":[],"award":"","doi":"10.1109/TVCG.2024.3354561","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20243354561","image_caption":"","keywords":["Exploratory Data Analysis, Interactive Data Analysis, Computational Notebooks, Hybrid Visualization-Scripting, Visualization Design"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"Design Concerns for Integrated Scripting and Interactive Visualization in Notebook Environments","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20243355884","abstract":"News articles containing data visualizations play an important role in informing the public on issues ranging from public health to politics. Recent research on the persuasive appeal of data visualizations suggests that prior attitudes can be notoriously difficult to change. Inspired by an NYT article, we designed two experiments to evaluate the impact of elicitation and contrasting narratives on attitude change, recall, and engagement. We hypothesized that eliciting prior beliefs leads to more elaborative thinking that ultimately results in higher attitude change, better recall, and engagement. Our findings revealed that visual elicitation leads to higher engagement in terms of feelings of surprise. While there is an overall attitude change across all experiment conditions, we did not observe a significant effect of belief elicitation on attitude change. With regard to recall error, while participants in the draw-trend elicitation condition exhibited significantly lower recall error than participants in the categorize-trend condition, we found no significant difference in recall error when comparing elicitation conditions to no elicitation. In a follow-up study, we added contrasting narratives with the purpose of making the main visualization (communicating data on the focal issue) appear strikingly different. Compared to the results of Study 1, we found that contrasting narratives improved engagement in terms of surprise and interest but interestingly resulted in higher recall error and no significant change in attitude. 
We discuss the effects of elicitation and contrasting narratives in the context of topic involvement and the strengths of temporal trends encoded in the data visualization.","authors":[],"award":"","doi":"10.1109/TVCG.2024.3355884","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20243355884","image_caption":"","keywords":["Data Visualization, Market Research, Visualization, Uncertainty, Data Models, Correlation, Attitude Control, Belief Elicitation, Visual Elicitation, Data Visualization, Contrasting Narratives"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"The Impact of Elicitation and Contrasting Narratives on Engagement, Recall and Attitude Change with News Articles Containing Data Visualization","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20233337396","abstract":"Partitioning a dynamic network into subsets (i.e., snapshots) based on disjoint time intervals is a widely used technique for understanding how structural patterns of the network evolve. However, selecting an appropriate time window (i.e., slicing a dynamic network into snapshots) is challenging and time-consuming, often involving a trial-and-error approach to investigating underlying structural patterns. To address this challenge, we present MoNetExplorer, a novel interactive visual analytics system that leverages temporal network motifs to provide recommendations for window sizes and support users in visually comparing different slicing results. MoNetExplorer provides a comprehensive analysis based on window size, including (1) a temporal overview to identify the structural information, (2) temporal network motif composition, and (3) node-link-diagram-based details to enable users to identify and understand structural patterns at various temporal resolutions. To demonstrate the effectiveness of our system, we conducted a case study with network researchers using two real-world dynamic network datasets. 
Our case studies show that the system effectively supports users in gaining valuable insights into the temporal and structural aspects of dynamic networks.","authors":[],"award":"","doi":"10.1109/TVCG.2023.3337396","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20233337396","image_caption":"","keywords":["Visual analytics, Measurement, Size measurement, Windows, Time measurement, Data visualization, Task analysis, Visual analytics, Dynamic networks, Temporal network motifs, Interactive network slicing"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"A Visual Analytics System for Analyzing Dynamic Networks with Temporal Network Motifs","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20243358919","abstract":"We conduct two in-lab experiments (N=93) to evaluate the effectiveness of Gantt charts, extended Gantt charts, and stringline charts for visualizing fixed-order event sequence data. We first formulate five types of event sequences and define three types of sequence elements: point events, interval events, and the temporal gaps between them. Our two experiments focus on event sequences with a pre-defined, fixed order, and measure task error rates and completion time. The first experiment shows single sequences and assesses the three charts' performance in comparing event duration or gap. The second experiment shows multiple sequences and evaluates how well the charts reveal temporal patterns. The results suggest that when visualizing single fixed-order event sequences, 1) Gantt and extended Gantt charts lead to comparable error rates in the duration-comparing task; 2) Gantt charts exhibit shorter or equal completion time compared to extended Gantt charts; 3) both Gantt and extended Gantt charts demonstrate shorter completion times than stringline charts; 4) however, stringline charts outperform the other two charts with fewer errors in the comparing task when event type counts are high. Additionally, when visualizing multiple point-based fixed-order event sequences, stringline charts require less time than Gantt charts for people to find temporal patterns. 
Based on these findings, we discuss design opportunities for visualizing fixed-order event sequences and future avenues for optimizing these charts.","authors":[],"award":"","doi":"10.1109/TVCG.2024.3358919","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20243358919","image_caption":"","keywords":["Gantt chart, stringline chart, Marey's graph, event sequence, empirical study"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"A Comparative Study on Fixed-order Event Sequence Visualizations: Gantt, Extended Gantt, and Stringline Charts","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20243364388","abstract":"Seasonal-trend decomposition based on loess (STL) is a powerful tool to explore time series data visually. In this paper, we present an extension of STL to uncertain data, named uncertainty-aware STL (UASTL). Our method propagates multivariate Gaussian distributions mathematically exactly through the entire analysis and visualization pipeline. Thereby, stochastic quantities shared between the components of the decomposition are preserved. Moreover, we present application scenarios with uncertainty modeling based on Gaussian processes, e.g., data with uncertain areas or missing values. Besides these mathematical results and modeling aspects, we introduce visualization techniques that address the challenges of uncertainty visualization and the problem of visualizing highly correlated components of a decomposition. The global uncertainty propagation enables the time series visualization with STL-consistent samples, the exploration of correlation between and within the decomposition's components, and the analysis of the impact of varying uncertainty. Finally, we show the usefulness of UASTL and the importance of uncertainty visualization with several examples. 
These examples include a comparison with conventional STL.","authors":[],"award":"","doi":"10.1109/TVCG.2024.3364388","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20243364388","image_caption":"","keywords":["- I.6.9.g Visualization techniques and methodologies < I.6.9 Visualization < I.6 Simulation, Modeling, and Visualization < I Computing Methodologies - G.3 Probability and Statistics < G Mathematics of Computing - G.3.n Statistical computing < G.3 Probability and Statistics < G Mathematics of Computing - G.3.p Stochastic processes < G.3 Probability and Statistics < G Mathematics of Computing"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"Uncertainty-Aware Seasonal-Trend Decomposition Based on Loess","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20243364841","abstract":"The need to understand the structure of hierarchical or high-dimensional data is present in a variety of fields. Hyperbolic spaces have proven to be an important tool for embedding computations and analysis tasks as their non-linear nature lends itself well to tree or graph data. Subsequently, they have also been used in the visualization of high-dimensional data, where they exhibit increased embedding performance. However, none of the existing dimensionality reduction methods for embedding into hyperbolic spaces scale well with the size of the input data. That is because the embeddings are computed via iterative optimization schemes and the computation cost of every iteration is quadratic in the size of the input. Furthermore, due to the non-linear nature of hyperbolic spaces, Euclidean acceleration structures cannot directly be translated to the hyperbolic setting. This paper introduces the first acceleration structure for hyperbolic embeddings, building upon a polar quadtree. We compare our approach with existing methods and demonstrate that it computes embeddings of similar quality in significantly less time. 
Implementation and scripts for the experiments can be found at this https URL.","authors":[],"award":"","doi":"10.1109/TVCG.2024.3364841","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20243364841","image_caption":"","keywords":["Human-Computer Interaction (cs.HC); Artificial Intelligence (cs.AI); Machine Learning (cs.LG); Quantitative Methods (q-bio.QM); Machine Learning (stat.ML); Dimensionality reduction, t-SNE, hyperbolic embedding, acceleration structure"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"Accelerating hyperbolic t-SNE","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20243365089","abstract":"Implicit neural representations (INRs) are widely used for scientific data reduction and visualization by modeling the function that maps a spatial location to a data value. Without any prior knowledge about the spatial distribution of values, we are forced to sample densely from INRs to perform visualization tasks like iso-surface extraction, which can be very computationally expensive. Recently, range analysis has shown promising results in improving the efficiency of geometric queries, such as ray casting and hierarchical mesh extraction, on INRs for 3D geometries by using arithmetic rules to bound the output range of the network within a spatial region. However, the analysis bounds are often too conservative for complex scientific data. In this paper, we present an improved technique for range analysis by revisiting the arithmetic rules and analyzing the probability distribution of the network output within a spatial region. We model this distribution efficiently as a Gaussian distribution by applying the central limit theorem. Excluding low probability values, we are able to tighten the output bounds, resulting in a more accurate estimation of the value range, and hence more accurate identification of iso-surface cells and more efficient iso-surface extraction on INRs. 
Our approach demonstrates superior performance in terms of the iso-surface extraction time on four datasets compared to the original range analysis method and can also be generalized to other geometric query tasks.","authors":[],"award":"","doi":"10.1109/TVCG.2024.3365089","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20243365089","image_caption":"","keywords":["Iso-surface extraction, implicit neural representation, uncertainty propagation, affine arithmetic."],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"Improving Efficiency of Iso-Surface Extraction on Implicit Neural Representations Using Uncertainty Propagation","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20233346641","abstract":"Currently, growing data sources and long-running algorithms impede user attention and interaction with visual analytics applications. Progressive visualization (PV) and visual analytics (PVA) alleviate this problem by allowing immediate feedback and interaction with large datasets and complex computations, avoiding waiting for complete results by using partial results improving with time. Yet, creating a progressive visualization requires more effort than a regular visualization but also opens up new possibilities, such as steering the computations towards more relevant parts of the data, thus saving computational resources. However, there is currently no comprehensive overview of the design space for progressive visualization systems. We surveyed the related work of PV and derived a new taxonomy for progressive visualizations by systematically categorizing all PV publications that included visualizations with progressive features. Progressive visualizations can be categorized by well-known visualization taxonomies, but we also found that progressive visualizations can be distinguished by the way they manage their data processing, data domain, and visual update. Furthermore, we identified key properties such as uncertainty, steering, visual stability, and real-time processing that are significantly different with progressive applications. We also collected evaluation methodologies reported by the publications and conclude with statistical findings, research gaps, and open challenges. 
A continuously updated visual browser of the survey data is available at visualsurvey.net/pva.","authors":[],"award":"","doi":"10.1109/TVCG.2023.3346641","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20233346641","image_caption":"","keywords":["Data visualization, Convergence, Visual analytics, Taxonomy Surveys, Rendering (computer graphics), Task analysis, Progressive Visual Analytics, Progressive Visualization, Taxonomy, State-of-the-Art Report, Survey"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"A Survey on Progressive Visualization","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20233287585","abstract":"Data visualization and journalism are deeply connected. From early infographics to recent data-driven storytelling, visualization has become an integrated part of contemporary journalism, primarily as a communication artifact to inform the general public. Data journalism, harnessing the power of data visualization, has emerged as a bridge between the growing volume of data and our society. Visualization research that centers around data storytelling has sought to understand and facilitate such journalistic endeavors. However, a recent metamorphosis in journalism has brought broader challenges and opportunities that extend beyond mere communication of data. We present this article to enhance our understanding of such transformations and thus broaden visualization research's scope and practical contribution to this evolving field. We first survey recent significant shifts, emerging challenges, and computational practices in journalism. We then summarize six roles of computing in journalism and their implications. Based on these implications, we provide propositions for visualization research concerning each role. 
Ultimately, by mapping the roles and propositions onto a proposed ecological model and contextualizing existing visualization research, we surface seven general topics and a series of research agendas that can guide future visualization research at this intersection.","authors":[],"award":"","doi":"10.1109/TVCG.2023.3287585","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20233287585","image_caption":"","keywords":["Computational journalism, data visualization, data-driven storytelling, journalism"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"More Than Data Stories: Broadening the Role of Visualization in Contemporary Journalism","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20243356566","abstract":"The increasing ubiquity of data in everyday life has elevated the importance of data literacy and accessible data representations, particularly for individuals with disabilities. While prior research predominantly focuses on the needs of the visually impaired, our survey aims to broaden this scope by investigating accessible data representations across a more inclusive spectrum of disabilities. After conducting a systematic review of 152 accessible data representation papers from ACM and IEEE databases, we found that roughly 78% of existing articles center on vision impairments. In this paper, we conduct a comprehensive review of the remaining 22% of papers focused on underrepresented disability communities. We developed categorical dimensions based on accessibility, visualization, and human-computer interaction to classify the papers. These dimensions include the community of focus, issues addressed, contribution type, study methods, participants, data type, visualization type, and data domain. Our work redefines accessible data representations by illustrating their application for disabilities beyond those related to vision. Building on our literature review, we identify and discuss opportunities for future research in accessible data representations. 
All supplemental materials are available at https://osf.io/yv4xm/?view_only=7b36a3fbf7a14b3888029966faa3def9.","authors":[],"award":"","doi":"10.1109/TVCG.2024.3356566","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20243356566","image_caption":"","keywords":["Accessibility, Data Representations."],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"Beyond Vision Impairments: Redefining the Scope of Accessible Data Representations","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20233306356","abstract":"A multitude of studies have been conducted on graph drawing, but many existing methods only focus on optimizing a single aesthetic aspect of graph layouts. There are a few existing methods that attempt to develop a flexible solution for optimizing different aesthetic aspects measured by different aesthetic criteria. Furthermore, thanks to the significant advance in deep learning techniques, several deep learning-based layout methods were proposed recently, which have demonstrated the advantages of the deep learning approaches for graph drawing. However, none of these existing methods can be directly applied to optimizing non-differentiable criteria without special accommodation. In this work, we propose a novel Generative Adversarial Network (GAN) based deep learning framework for graph drawing, called SmartGD, which can optimize any quantitative aesthetic goals even though they are non-differentiable. In the cases where the aesthetic goal is too abstract to be described mathematically, SmartGD can draw graphs in a similar style as a collection of good layout examples, which might be selected by humans based on the abstract aesthetic goal. To demonstrate the effectiveness and efficiency of SmartGD, we conduct experiments on minimizing stress, minimizing edge crossing, maximizing crossing angle, and a combination of multiple aesthetics. 
The experimental results show that SmartGD achieves good performance both quantitatively and qualitatively compared with several popular graph drawing algorithms.","authors":[],"award":"","doi":"10.1109/TVCG.2023.3306356","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20233306356","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"SmartGD: A GAN-Based Graph Drawing Framework for Diverse Aesthetic Goals","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20233346640","abstract":"Is it true that if citizens understand hurricane probabilities, they will make more rational decisions for evacuation? Finding answers to such questions is not straightforward in the literature because the terms \u201cjudgment\u201d and \u201cdecision making\u201d are often used interchangeably. This terminology conflation leads to a lack of clarity on whether people make suboptimal decisions because of inaccurate judgments of information conveyed in visualizations or because they use alternative yet currently unknown heuristics. To decouple judgment from decision making, we review relevant concepts from the literature and present two preregistered experiments (N=601) to investigate if the task (judgment vs. decision making), the scenario (sports vs. humanitarian), and the visualization (quantile dotplots, density plots, probability bars) affect accuracy. While experiment 1 was inconclusive, we found evidence for a difference in experiment 2. Contrary to our expectations and previous research, which found decisions less accurate than their direct-equivalent judgments, our results pointed in the opposite direction. Our findings further revealed that decisions were less vulnerable to status-quo bias, suggesting decision makers may disfavor responses associated with inaction. We also found that both scenario and visualization types can influence people's judgments and decisions. Although effect sizes are not large and results should be interpreted carefully, we conclude that judgments cannot be safely used as proxy tasks for decision making, and discuss implications for visualization research and beyond. 
Materials and preregistrations are available at https://osf.io/ufzp5/?view_only=adc0f78a23804c31bf7fdd9385cb264f.","authors":[],"award":"","doi":"10.1109/TVCG.2023.3346640","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20233346640","image_caption":"","keywords":["Data visualization, Task analysis, Decision making, Visualization, Bars, Sports, Terminology, Cognition, Decision Making, Judgment, Psychology, Visualization"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"Decoupling Judgment and Decision Making: A Tale of Two Tails","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20243372620","abstract":"Small multiples are a popular visualization method, displaying different views of a dataset using multiple frames, often with the same scale and axes. However, there is a need to address their potential constraints, especially in the context of human cognitive capacity limits. These limits dictate the maximum information our mind can process at once. We explore the issue of capacity limitation by testing competing theories that describe how the number of frames shown in a display, the scale of the frames, and time constraints impact user performance with small multiples of line charts in an energy grid scenario. In two online studies (Experiment 1, n = 141; Experiment 2, n = 360) and a follow-up eye-tracking analysis (n = 5), we found a linear decline in accuracy with increasing frames across seven tasks, which was not fully explained by differences in frame size, suggesting visual search challenges. Moreover, the studies demonstrate that highlighting specific frames can mitigate some visual search difficulties but, surprisingly, not eliminate them. 
This research offers insights into optimizing the utility of small multiples by aligning them with human limitations.","authors":[],"award":"","doi":"10.1109/TVCG.2024.3372620","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20243372620","image_caption":"","keywords":["Cognition, small multiples, time-series data"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"Examining Limits of Small Multiples: Frame Quantity Impacts Judgments with Line Graphs","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20243381453","abstract":"Scatterplots provide a visual representation of bivariate data (or 2D embeddings of multivariate data) that allows for effective analyses of data dependencies, clusters, trends, and outliers. Unfortunately, classical scatterplots suffer from scalability issues, since growing data sizes eventually lead to overplotting and visual clutter on a screen with a fixed resolution, which hinders the data analysis process. We propose an algorithm that compensates for irregular sample distributions by a smooth transformation of the scatterplot's visual domain. Our algorithm evaluates the scatterplot's density distribution to compute a regularization mapping based on integral images of the rasterized density function. The mapping preserves the samples' neighborhood relations. Few regularization iterations suffice to achieve a nearly uniform sample distribution that efficiently uses the available screen space. We further propose approaches to visually convey the transformation that was applied to the scatterplot and compare them in a user study. We present a novel parallel algorithm for fast GPU-based integral-image computation, which allows for integrating our de-cluttering approach into interactive visual data analysis systems.","authors":[],"award":"","doi":"10.1109/TVCG.2024.3381453","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20243381453","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"De-cluttering Scatterplots with Integral Images","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20243382607","abstract":"Advanced manufacturing creates increasingly complex objects with material compositions that are often difficult to characterize by a single modality. 
Our domain scientists are going beyond traditional methods by employing both X-ray and neutron computed tomography to obtain complementary representations expected to better resolve material boundaries. However, the use of two modalities creates its own challenges for visualization, requiring either complex adjustments of multimodal transfer functions or the need for multiple views. Together with experts in nondestructive evaluation, we designed a novel interactive multimodal visualization approach to create a combined view of the co-registered X-ray and neutron acquisitions of industrial objects. Using an automatic topological segmentation of the bivariate histogram of X-ray and neutron values as a starting point, the system provides a simple yet effective interface to easily create, explore, and adjust a multimodal visualization. We propose a widget with simple brushing interactions that enables the user to quickly correct the segmented histogram results. Our semiautomated system enables domain experts to intuitively explore large multimodal datasets without the need for either advanced segmentation algorithms or knowledge of visualization techniques. We demonstrate our approach using synthetic examples, industrial phantom objects created to stress multimodal scanning techniques, and real-world objects, and we discuss expert feedback.","authors":[],"award":"","doi":"10.1109/TVCG.2024.3382607","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20243382607","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"Bimodal Visualization of Industrial X-ray and Neutron Computed Tomography Data","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20243374571","abstract":"Visualization Recommendation Systems (VRSs) are a novel and challenging field of study aiming to help generate insightful visualizations from data and support non-expert users in information discovery. Among the many contributions proposed in this area, some systems embrace the ambitious objective of imitating human analysts to identify relevant relationships in data and make appropriate design choices to represent these relationships with insightful charts. We denote these systems as \"agnostic\" VRSs since they do not rely on human-provided constraints and rules but try to learn the task autonomously. Despite the high application potential of agnostic VRSs, their progress is hindered by several obstacles, including the absence of standardized datasets to train recommendation algorithms, the difficulty of learning design rules, and defining quantitative criteria for evaluating the perceptual effectiveness of generated plots. 
This paper summarizes the literature on agnostic VRSs and outlines promising future research directions.","authors":[],"award":"","doi":"10.1109/TVCG.2024.3374571","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20243374571","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"Agnostic Visual Recommendation Systems: Open Challenges and Future Directions","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20243382760","abstract":"Time-stamped event sequences (TSEQs) are time-oriented data without value information, shifting the focus of users to the exploration of temporal event occurrences. TSEQs exist in application domains, such as sleeping behavior, earthquake aftershocks, and stock market crashes. Domain experts face four challenges, for which they could use interactive and visual data analysis methods. First, TSEQs can be large with respect to both the number of sequences and events, often leading to millions of events. Second, domain experts need validated metrics and features to identify interesting patterns. Third, after identifying interesting patterns, domain experts contextualize the patterns to foster sensemaking. Finally, domain experts seek to reduce data complexity by data simplification and machine learning support. We present IVESA, a visual analytics approach for TSEQs. It supports the analysis of TSEQs at the granularities of sequences and events, supported with metrics and feature analysis tools. IVESA has multiple linked views that support overview, sort+filter, comparison, details-on-demand, and metadata relation-seeking tasks, as well as data simplification through feature analysis, interactive clustering, filtering, and motif detection and simplification. We evaluated IVESA with three case studies and a user study with six domain experts working with six different datasets and applications. 
Results demonstrate the usability and generalizability of IVESA across applications and cases that had up to 1,000,000 events.","authors":[],"award":"","doi":"10.1109/TVCG.2024.3382760","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20243382760","image_caption":"","keywords":["Time-Stamped Event Sequences, Time-Oriented Data, Visual Analytics, Data-First Design Study, Iterative Design, Visual Interfaces, User Evaluation"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"Visual Analysis of Time-Stamped Event Sequences","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20243385118","abstract":"Genomics is at the core of precision medicine, and there are high expectations on genomics-enabled improvement of patient outcomes in the years to come. Around the world, initiatives to increase the use of DNA sequencing in clinical routine are being deployed, such as the use of broad panels in the standard care for oncology patients. Such a development comes at the cost of increased demands on throughput in genomic data analysis. In this paper, we use the task of copy number variant (CNV) analysis as a context for exploring visualization concepts for clinical genomics. CNV calls are generated algorithmically, but time-consuming manual intervention is needed to separate relevant findings from irrelevant ones in the resulting large call candidate lists. We present a visualization environment, named Copycat, to support this review task in a clinical scenario. Key components are a scatter-glyph plot replacing the traditional list visualization, and a glyph representation designed for at-a-glance relevance assessments. 
Moreover, we present results from a formative evaluation of the prototype by domain specialists, from which we elicit insights to guide both prototype improvements and visualization for clinical genomics in general.","authors":[],"award":"","doi":"10.1109/TVCG.2024.3385118","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20243385118","image_caption":"","keywords":["Visualization, genomics, copy number variants, clinical decision support, evaluation"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"Visualization for diagnostic review of copy number variants in complex DNA sequencing data","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20243390219","abstract":"This system paper documents the technical foundations for the extension of the Topology ToolKit (TTK) to distributed-memory parallelism with the Message Passing Interface (MPI). While several recent papers introduced topology-based approaches for distributed-memory environments, these reported experiments obtained with tailored, mono-algorithm implementations. In contrast, we describe in this paper a versatile approach (supporting both triangulated domains and regular grids) for the support of topological analysis pipelines, i.e., a sequence of topological algorithms interacting together, possibly on distinct numbers of processes. While developing this extension, we faced several algorithmic and software engineering challenges, which we document in this paper. We describe an MPI extension of TTK\u2019s data structure for triangulation representation and traversal, a central component to the global performance and generality of TTK\u2019s topological implementations. We also introduce an intermediate interface between TTK and MPI, both at the global pipeline level, and at the fine-grain algorithmic level. We provide a taxonomy for the distributed-memory topological algorithms supported by TTK, depending on their communication needs, and provide examples of hybrid MPI+thread parallelizations. Detailed performance analyses show that parallel efficiencies range from 20% to 80% (depending on the algorithms), and that the MPI-specific preconditioning introduced by our framework induces a negligible computation time overhead. We illustrate the new distributed-memory capabilities of TTK with an example of an advanced analysis pipeline, combining multiple algorithms, run on the largest publicly available dataset we have found (120 billion vertices) on a standard cluster with 64 nodes (for a total of 1536 cores). 
Finally, we provide a roadmap for the completion of TTK\u2019s MPI extension, along with generic recommendations for each algorithm communication category.","authors":[],"award":"","doi":"10.1109/TVCG.2024.3390219","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20243390219","image_caption":"","keywords":["Topological data analysis, high-performance computing, distributed-memory algorithms."],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"TTK is Getting MPI-Ready","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20243368621","abstract":"The use of natural language interfaces (NLIs) to create charts is becoming increasingly popular due to the intuitiveness of natural language interactions. One key challenge in this approach is to accurately capture user intents and transform them into proper chart specifications. This obstructs the wide use of NLIs in chart generation, as users' natural language inputs are generally abstract (i.e., ambiguous or under-specified), without a clear specification of visual encodings. Recently, pre-trained large language models (LLMs) have exhibited superior performance in understanding and generating natural language, demonstrating great potential for downstream tasks. Inspired by this major trend, we propose ChartGPT, which generates charts from abstract natural language inputs. However, LLMs struggle to address complex logic problems. To enable the model to accurately specify the complex parameters and perform operations in chart generation, we decompose the generation process into a step-by-step reasoning pipeline, so that the model only needs to reason about a single, specific sub-task during each run. Moreover, LLMs are pre-trained on general datasets, which might be biased for the task of chart generation. To provide adequate visualization knowledge, we create a dataset consisting of abstract utterances and charts and improve model performance through fine-tuning. We further design an interactive interface for ChartGPT that allows users to check and modify the intermediate outputs of each step.
The effectiveness of the proposed system is evaluated through quantitative evaluations and a user study.","authors":[],"award":"","doi":"10.1109/TVCG.2024.3368621","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20243368621","image_caption":"","keywords":["Natural language interfaces, large language models, data visualization"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"ChartGPT: Leveraging LLMs to Generate Charts from Abstract Natural Language","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20243383089","abstract":"The advances in AI-enabled techniques have accelerated the creation and automation of visualizations in the past decade. However, presenting visualizations in a descriptive and generative format remains a challenge. Moreover, current visualization embedding methods focus on standalone visualizations, neglecting the importance of contextual information for multi-view visualizations. To address this issue, we propose a new representation model, Chart2Vec, to learn a universal embedding of visualizations with context-aware information. Chart2Vec aims to support a wide range of downstream visualization tasks such as recommendation and storytelling. Our model considers both structural and semantic information of visualizations in declarative specifications. To enhance the context-aware capability, Chart2Vec employs multi-task learning on both supervised and unsupervised tasks concerning the cooccurrence of visualizations. We evaluate our method through an ablation study, a user study, and a quantitative comparison. The results verified the consistency of our embedding method with human cognition and showed its advantages over existing methods.","authors":[],"award":"","doi":"10.1109/TVCG.2024.3383089","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20243383089","image_caption":"","keywords":["Representation Learning, Multi-view Visualization, Visual Storytelling, Visualization Embedding"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"Chart2Vec: A Universal Embedding of Context-Aware Visualizations","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20243392587","abstract":"The issue of traffic congestion poses a significant obstacle to the development of global cities. One promising solution to tackle this problem is intelligent traffic signal control (TSC). 
Recently, TSC strategies leveraging reinforcement learning (RL) have garnered attention among researchers. However, the evaluation of these models has primarily relied on fixed metrics like reward and queue length. This limited evaluation approach provides only a narrow view of the model\u2019s decision-making process, impeding its practical implementation. Moreover, effective TSC necessitates coordinated actions across multiple intersections. Existing visual analysis solutions fall short when applied in multi-agent settings. In this study, we delve into the challenge of interpretability in multi-agent reinforcement learning (MARL), particularly within the context of TSC. We propose MARLens, a visual analytics system tailored to understand MARL-based TSC. Our system serves as a versatile platform for both RL and TSC researchers. It empowers them to explore the model\u2019s features from various perspectives, revealing its decision-making processes and shedding light on interactions among different agents. To facilitate quick identification of critical states, we have devised multiple visualization views, complemented by a traffic simulation module that allows users to replay specific training scenarios. To validate the utility of our proposed system, we present three comprehensive case studies, incorporate insights from domain experts through interviews, and conduct a user study. These collective efforts underscore the feasibility and effectiveness of MARLens in enhancing our understanding of MARL-based TSC systems and pave the way for more informed and efficient traffic management strategies.","authors":[],"award":"","doi":"10.1109/TVCG.2024.3392587","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20243392587","image_caption":"","keywords":["Traffic signal control, multi-agent, reinforcement learning, visual analytics"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"MARLens: Understanding Multi-agent Reinforcement Learning for Traffic Signal Control via Visual Analytics","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20243392476","abstract":"Areas of interest (AOIs) are well-established means of providing semantic information for visualizing, analyzing, and classifying gaze data. However, the usual manual annotation of AOIs is time-consuming and further impaired by ambiguities in label assignments. To address these issues, we present an interactive labeling approach that combines visualization, machine learning, and user-centered explainable annotation. Our system provides uncertainty-aware visualization to build trust in classification with an increasing number of annotated examples. It combines specifically designed EyeFlower glyphs, dimensionality reduction, and selection and exploration techniques in an integrated workflow. The approach is versatile and hardware-agnostic, supporting video stimuli from stationary and unconstrained mobile eye tracking alike.
We conducted an expert review to assess labeling strategies and trust building.","authors":[],"award":"","doi":"10.1109/TVCG.2024.3392476","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20243392476","image_caption":"","keywords":["Visual analytics, eye tracking, uncertainty, active learning, trust building"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"Active Gaze Labeling: Visualization for Trust Building","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20233324851","abstract":"Dimensionality reduction (DR) algorithms are diverse and widely used for analyzing high-dimensional data. Various metrics and tools have been proposed to evaluate and interpret the DR results. However, most metrics and methods fail to generalize well to measuring arbitrary DR results from the perspective of original distribution fidelity, or lack interactive exploration of DR results. There is still a need for more intuitive and quantitative analysis to interactively explore high-dimensional data and improve interpretability. We propose a metric and a generalized algorithm-agnostic approach based on the concept of capacity to evaluate and analyze the DR results. Based on our approach, we develop a visual analytics system, HiLow, for exploring high-dimensional data and projections. We also propose a mixed-initiative recommendation algorithm that assists users in interactively manipulating DR results. Users can compare the differences in data distribution after the interaction through HiLow. Furthermore, we propose a novel visualization design focusing on quantitative analysis of differences between high- and low-dimensional data distributions.
Finally, through a user study and case studies, we validate the effectiveness of our approach and system in enhancing the interpretability of projections and analyzing the distribution of high- and low-dimensional data.","authors":[],"award":"","doi":"10.1109/TVCG.2023.3324851","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20233324851","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"Interpreting High-Dimensional Projections With Capacity","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20243394745","abstract":"The fund investment industry heavily relies on the expertise of fund managers, who bear the responsibility of managing portfolios on behalf of clients. With their investment knowledge and professional skills, fund managers gain a competitive advantage over the average investor in the market. Consequently, investors prefer entrusting their investments to fund managers rather than directly investing in funds. For these investors, the primary concern is selecting a suitable fund manager. While previous studies have employed quantitative or qualitative methods to analyze various aspects of fund managers, such as performance metrics, personal characteristics, and performance persistence, they often face challenges when dealing with a large candidate space. Moreover, distinguishing whether a fund manager's performance stems from skill or luck poses a challenge, making it difficult to align with investors' preferences in the selection process. To address these challenges, this study characterizes the requirements of investors in selecting suitable fund managers and proposes an interactive visual analytics system called FMLens. This system streamlines the fund manager selection process, allowing investors to efficiently assess and deconstruct fund managers' investment styles and abilities across multiple dimensions. Additionally, the system empowers investors to scrutinize and compare fund managers' performances. The effectiveness of the approach is demonstrated through two case studies and a qualitative user study.
Feedback from domain experts indicates that the system excels in analyzing fund managers from diverse perspectives, enhancing the efficiency of fund manager evaluation and selection.","authors":[],"award":"","doi":"10.1109/TVCG.2024.3394745","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20243394745","image_caption":"","keywords":["Financial Data, Fund Manager Selection, Visual Analytics"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"FMLens: Towards Better Scaffolding the Process of Fund Manager Selection in Fund Investments","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20233336588","abstract":"This article explores how the ability to recall information in data visualizations depends on the presentation technology. Participants viewed 10 Isotype visualizations on a 2D screen, in 3D, in Virtual Reality (VR) and in Mixed Reality (MR). To provide a fair comparison between the three 3D conditions, we used LIDAR to capture the details of the physical rooms, and used this information to create our textured 3D models. For all environments, we measured the number of visualizations recalled and their order (2D) or spatial location (3D, VR, MR). We also measured the number of syntactic and semantic features recalled. Results of our study show increased recall and greater richness of data understanding in the MR condition. Not only did participants recall more visualizations and ordinal/spatial positions in MR, but they also remembered more details about graph axes and data mappings, and more information about the shape of the data. 
We discuss how differences in the spatial and kinesthetic cues provided in these different environments could contribute to these results, and reasons why we did not observe comparable performance in the 3D and VR conditions.","authors":[],"award":"","doi":"10.1109/TVCG.2023.3336588","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20233336588","image_caption":"","keywords":["Data visualization, Three-dimensional displays, Virtual reality, Mixed reality, Electronic mail, Syntactics, Semantics"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"Memory Recall for Data Visualizations in Mixed Reality, Virtual Reality, 3D, and 2D","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20243372104","abstract":"With the rise of short-form video platforms and the increasing availability of data, we see the potential for people to share short-form videos embedded with data in situ (e.g., daily steps when running) to increase the credibility and expressiveness of their stories. However, creating and sharing such videos in situ is challenging since it involves multiple steps and skills (e.g., data visualization creation and video editing), especially for amateurs. By conducting a formative study (N=10) using three design probes, we collected the motivations and design requirements. We then built VisTellAR, a mobile AR authoring tool, to help amateur video creators embed data visualizations in short-form videos in situ. A two-day user study shows that participants (N=12) successfully created various videos with data visualizations in situ and they confirmed the ease of use and learning. 
AR pre-stage authoring helped people set up data visualizations in the real environment and supported richer storytelling designs involving camera movements and interactions with gestures and physical objects.","authors":[],"award":"","doi":"10.1109/TVCG.2024.3372104","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20243372104","image_caption":"","keywords":["Personal data, augmented reality, data visualization, storytelling, short-form video"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"VisTellAR: Embedding Data Visualization to Short-form Videos Using Mobile Augmented Reality","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20243406387","abstract":"The process of labeling medical text plays a crucial role in medical research. Nonetheless, creating accurately labeled medical texts of high quality is often a time-consuming task that requires specialized domain knowledge. Traditional methods for generating labeled data typically rely on rigid rule-based approaches, which may not adapt well to new tasks. While recent machine learning (ML) methodologies have mitigated manual labeling efforts, configuring models to align with specific research requirements can be challenging for labelers without technical expertise. Moreover, automated labeling techniques, such as transfer learning, face difficulties in directly incorporating expert input, whereas semi-automated methods, like data programming, allow knowledge integration through rules or knowledge bases but may lack continuous result refinement throughout the entire labeling process. In this study, we present a collaborative human-ML teaming workflow that seamlessly integrates visual cluster analysis and active learning to assist domain experts in labeling medical text with high efficiency. Additionally, we introduce an innovative neural network model called the embedding network, which incorporates expert insights to generate task-specific embeddings for medical texts. We integrate the workflow and embedding network into a visual analytics tool named KMTLabeler, equipped with coordinated multi-level views and interactions.
Two illustrative case studies, along with a controlled user study, provide substantial evidence of the effectiveness of KMTLabeler in creating an efficient labeling environment for medical text classification.","authors":[],"award":"","doi":"10.1109/TVCG.2024.3406387","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20243406387","image_caption":"","keywords":["Medical Text Labeling, Expert Knowledge, Embedding Network, Visual Cluster Analysis, Active Learning"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"KMTLabeler: An Interactive Knowledge-Assisted Labeling Tool for Medical Text Classification","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20243411786","abstract":"We present a novel method for the interactive construction and rendering of extremely large molecular scenes, capable of representing multiple biological cells in atomistic detail. Our method is tailored for scenes that are procedurally constructed based on a given set of building rules. Rendering of large scenes normally requires the entire scene available in-core, or alternatively, it requires out-of-core management to load data into the memory hierarchy as a part of the rendering loop. Instead of out-of-core memory management, we propose to procedurally generate the scene on-demand on the fly. The key idea is a positional- and view-dependent procedural scene-construction strategy, where only a fraction of the atomistic scene around the camera is available in the GPU memory at any given time. The atomistic detail is populated into a uniform-space partitioning using a grid that covers the entire scene. Most of the grid cells are not filled with geometry; only those that are potentially seen by the camera are populated. The atomistic detail is populated in a compute shader, and its representation is connected with acceleration data structures for hardware ray-tracing of modern GPUs. Objects that are far away, where atomistic detail is not perceivable from a given viewpoint, are represented by a triangle mesh mapped with a seamless texture generated from renderings of the atomistic geometry. The algorithm consists of two pipelines, the construction-compute pipeline and the rendering pipeline, which work together to render molecular scenes containing trillions of atoms at an atomistic resolution far beyond the limit of the GPU memory.
We demonstrate our technique on multiple models of SARS-CoV-2 and the red blood cell.","authors":[],"award":"","doi":"10.1109/TVCG.2024.3411786","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20243411786","image_caption":"","keywords":["Interactive rendering, view-guided scene construction, biological data, hardware ray tracing"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"Nanomatrix: Scalable Construction of Crowded Biological Environments","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20243408255","abstract":"Generative text-to-image models, which allow users to create appealing images through a text prompt, have seen a dramatic increase in popularity in recent years. However, most users have a limited understanding of how such models work and often rely on trial-and-error strategies to achieve satisfactory results. The prompt history contains a wealth of information that could provide users with insights into what has been explored and how the prompt changes impact the output image, yet little research attention has been paid to the visual analysis of such a process to support users. We propose the Image Variant Graph, a novel visual representation designed to support comparing prompt-image pairs and exploring the editing history. The Image Variant Graph models prompt differences as edges between corresponding images and presents the distances between images through projection. Based on the graph, we developed the PrompTHis system through co-design with artists. By reviewing and analyzing the prompting history, users can better understand the impact of prompt changes and gain more effective control of image generation.
A quantitative user study and qualitative interviews demonstrate that PrompTHis can help users review the prompt history, make sense of the model, and plan their creative process.","authors":[],"award":"","doi":"10.1109/TVCG.2024.3408255","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20243408255","image_caption":"","keywords":["Text visualization, image visualization, text-to-image generation, editing history, provenance, generative art"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"PrompTHis: Visualizing the Process and Influence of Prompt Editing during Text-to-Image Creation","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20223193756","abstract":"Information visualization uses various types of representations to encode data into graphical formats. Prior work on visualization techniques has evaluated the accuracy of perceived numerical data values from visual data encodings such as graphical position, length, orientation, size, and color. Our work aims to extend graphical perception research to the use of motion as a data encoding for quantitative values. We present two experiments that implement multiple fundamental aspects of motion, such as type, speed, and synchronicity, for numerical value encoding, and that compare motion to static visual encodings in terms of user perception and accuracy. We studied how well users can assess the differences between several types of motion and static visual encodings and present an updated ranking of accuracy for quantitative judgments. Our results indicate that non-synchronized motion can be interpreted more quickly and more accurately than synchronized motion. Moreover, our ranking of static and motion visual representations shows that motion, especially expansion and translational types, has great potential as a data encoding technique for quantitative values.
Finally, we discuss the implications for the use of animation and motion for numerical representations in data visualization.","authors":[],"award":"","doi":"10.1109/TVCG.2022.3193756","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20223193756","image_caption":"","keywords":["Information visualization, animation and motion-related techniques, empirical study, graphical perception, evaluation."],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"Evaluating Graphical Perception of Visual Motion for Quantitative Data Encoding","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20243402610","abstract":"Point clouds are widely used as a versatile representation of 3D entities and scenes for all scale domains and in a variety of application areas, serving as a fundamental data category to directly convey spatial features. However, due to point sparsity, lack of structure, irregular distribution, and acquisition-related inaccuracies, results of point cloud visualization are often subject to visual complexity and ambiguity. In this regard, non-photorealistic rendering can improve visual communication by reducing the cognitive effort required to understand an image or scene and by directing attention to important features. In the last 20 years, this has been demonstrated by various non-photorealistic rendering approaches that were proposed to target point clouds specifically. However, they do not use a common language or structure for assessment, which complicates comparison and selection. Further, recent developments regarding point cloud characteristics and processing, such as massive data sizes or web-based rendering, are rarely considered. To address these issues, we present a survey on non-photorealistic rendering approaches for point cloud visualization, providing an overview of the current state of research. We derive a structure for the assessment of approaches, proposing seven primary dimensions for the categorization regarding intended goals, data requirements, used techniques, and mode of operation.
We then systematically assess corresponding approaches and utilize this classification to identify trends and research gaps, motivating future research in the development of effective non-photorealistic point cloud rendering methods.","authors":[],"award":"","doi":"10.1109/TVCG.2024.3402610","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20243402610","image_caption":"","keywords":["Point clouds, survey, non-photorealistic rendering"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"A Survey on Non-photorealistic Rendering Approaches for Point Cloud Visualization","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20243413195","abstract":"With the growing complexity and volume of data, visualizations have become more intricate, often requiring advanced techniques to convey insights. These complex charts are prevalent in everyday life, and individuals who lack knowledge in data visualization may find them challenging to understand. This paper investigates using Large Language Models (LLMs) to help users with low data literacy understand complex visualizations. While previous studies focus on text interactions with users, we noticed that visual cues are also critical for interpreting charts. We introduce an LLM application that supports both text and visual interaction for guiding chart interpretation. Our study with 26 participants revealed that the in-situ support effectively assisted users in interpreting charts and enhanced learning by addressing specific chart-related questions and encouraging further exploration. Visual communication allowed participants to convey their interests straightforwardly, eliminating the need for textual descriptions. However, the LLM assistance led users to engage less with the system, resulting in fewer insights from the visualizations. This suggests that users, particularly those with lower data literacy and motivation, may have over-relied on the LLM agent. 
We discuss opportunities for deploying LLMs to enhance visualization literacy while emphasizing the need for a balanced approach.","authors":[],"award":"","doi":"10.1109/TVCG.2024.3413195","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20243413195","image_caption":"","keywords":["Visualization literacy, Large language model, Visual communication"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"Enhancing Data Literacy On-demand: LLMs as Guides for Novices in Chart Interpretation","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20243397004","abstract":"Data charts are prevalent across various fields due to their efficacy in conveying complex data relationships. However, static charts may sometimes struggle to engage readers and efficiently present intricate information, potentially resulting in limited understanding. We introduce \u201cLive Charts,\u201d a new format of presentation that decomposes complex information within a chart and explains the information pieces sequentially through rich animations and accompanying audio narration. We propose an automated approach to revive static charts into Live Charts. Our method integrates GNN-based techniques to analyze the chart components and extract data from charts. Then we adopt large language models to generate appropriate animated visuals along with a voice-over to produce Live Charts from static ones. We conducted a thorough evaluation of our approach, which involved model performance, use cases, a crowd-sourced user study, and expert interviews. The results demonstrate that Live Charts offer a multi-sensory experience where readers can follow the information and understand the data insights better.
We analyze the benefits and drawbacks of Live Charts over static charts as a new information consumption experience.","authors":[],"award":"","doi":"10.1109/TVCG.2024.3397004","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20243397004","image_caption":"","keywords":["Charts, storytelling, machine learning, automatic visualization"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"Reviving Static Charts into Live Charts","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20243376406","abstract":"Event timelines are important for navigating and understanding collaborative text writing, especially as time passes and the size and complexity of both the text and the timeline increase. They are often employed by applications such as code repositories and collaborative text editors. In this paper, we present a visualization tool to explore historical records of the writing of legislative texts, which were discussed and voted on by an assembly of representatives. Our visualization focuses on event timelines from text documents that involve multiple people and different topics, allowing for observation of different proposed versions of the text or tracking the provenance of given text sections, while highlighting the connections between all elements involved. We also describe the process of designing such a tool alongside domain experts, with three rounds of evaluation conducted to verify the effectiveness of our design.","authors":[],"award":"","doi":"10.1109/TVCG.2024.3376406","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20243376406","image_caption":"","keywords":["Data visualization, Collaboration, History, Humanities, Writing, Navigation, Metadata"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"Interactive Hierarchical Timeline for Collaborative Text Negotiation in Historical Records","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20243411575","abstract":"Creating an animated data video with audio narration is a time-consuming and complex task that requires expertise. It involves designing complex animations, turning written scripts into audio narrations, and synchronizing visual changes with the narrations. This paper presents WonderFlow, an interactive authoring tool that facilitates narration-centric design of animated data videos.
WonderFlow allows authors to easily specify semantic links between text and the corresponding chart elements. Then it automatically generates audio narration by leveraging text-to-speech techniques and aligns the narration with an animation. WonderFlow provides a structure-aware animation library designed to ease chart animation creation, enabling authors to apply pre-designed animation effects to common visualization components. Additionally, authors can preview and refine their data videos within the same system, without having to switch between different creation tools. A series of evaluation results confirmed that WonderFlow is easy to use and simplifies the creation of data videos with narration-animation interplay.","authors":[],"award":"","doi":"10.1109/TVCG.2024.3411575","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20243411575","image_caption":"","keywords":["Data video, Data visualization, Narration-animation interplay, Storytelling, Authoring tool"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"WonderFlow: Narration-Centric Design of Animated Data Videos","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20233333356","abstract":"As urban populations grow, effectively accessing urban performance measures such as livability and comfort becomes increasingly important due to their significant socioeconomic impacts. While Point of Interest (POI) data has been utilized for various applications in location-based services, its potential for urban performance analytics remains unexplored. In this paper, we present SenseMap, a novel approach for analyzing urban performance by leveraging POI data as a semantic representation of urban functions. We quantify the contribution of POIs to different urban performance measures by calculating semantic textual similarities on our constructed corpus. We propose Semantic-adaptive Kernel Density Estimation which takes into account POIs\u2019 influential areas across different Traffic Analysis Zones and semantic contributions to generate semantic density maps for measures. We design and implement a feature-rich, real-time visual analytics system for users to explore the urban performance of their surroundings. Evaluations with human judgment and reference data demonstrate the feasibility and validity of our method.
Usage scenarios and user studies demonstrate the capability, usability, and explainability of our system.","authors":[],"award":"","doi":"10.1109/TVCG.2023.3333356","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20233333356","image_caption":"","keywords":["Urban data, semantic textual similarity, point of interest, density map, visual analytics, visualization design"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"SenseMap: Urban Performance Visualization and Analytics via Semantic Textual Similarity","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20243402834","abstract":"Impact dynamics are crucial for estimating the growth patterns of NFT projects by tracking the diffusion and decay of their relative appeal among stakeholders. Machine learning methods for impact dynamics analysis are incomprehensible and rigid in terms of their interpretability and transparency, whilst stakeholders require interactive tools for informed decision-making. Nevertheless, developing such a tool is challenging due to the substantial, heterogeneous NFT transaction data and the requirements for flexible, customized interactions. To this end, we integrate intuitive visualizations to unveil the impact dynamics of NFT projects. We first conduct a formative study and summarize analysis criteria, including substitution mechanisms, impact attributes, and design requirements from stakeholders. Next, we propose the Minimal Substitution Model to simulate substitutive systems of NFT projects that can be feasibly represented as node-link graphs. Particularly, we utilize attribute-aware techniques to embed the project status and stakeholder behaviors in the layout design. Accordingly, we develop a multi-view visual analytics system, namely NFTracer, allowing interactive analysis of impact dynamics in NFT transactions. We demonstrate the informativeness, effectiveness, and usability of NFTracer by performing two case studies with domain experts and one user study with stakeholders. The studies suggest that NFT projects featuring a higher degree of similarity are more likely to substitute each other. 
The impact of NFT projects within substitutive systems is contingent upon the degree of stakeholders\u2019 influx and projects\u2019 freshness.","authors":[],"award":"","doi":"10.1109/TVCG.2024.3402834","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20243402834","image_caption":"","keywords":["Stakeholders, Nonfungible Tokens, Social Networking Online, Visual Analytics, Network Analyzers, Measurement, Layout, Impact Dynamics Analysis, Non-Fungible Tokens (NFTs), NFT Transaction Data, Substitutive Systems"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"Tracing NFT Impact Dynamics in Transaction-flow Substitutive Systems with Visual Analytics","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20243368060","abstract":"Visual analytics supports data analysis tasks within complex domain problems. However, due to the richness of data types, visual designs, and interaction designs, users need to recall and process a significant amount of information when they visually analyze data. These challenges emphasize the need for more intelligent visual analytics methods. Large language models have demonstrated the ability to interpret various forms of textual data, offering the potential to facilitate intelligent support for visual analytics. We propose LEVA, a framework that uses large language models to enhance users' VA workflows at multiple stages: onboarding, exploration, and summarization. To support onboarding, we use large language models to interpret visualization designs and view relationships based on system specifications. For exploration, we use large language models to recommend insights based on the analysis of system status and data to facilitate mixed-initiative exploration. For summarization, we present a selective reporting strategy to retrace analysis history through a stream visualization and generate insight reports with the help of large language models. We demonstrate how LEVA can be integrated into existing visual analytics systems.
Two usage scenarios and a user study suggest that LEVA effectively aids users in conducting visual analytics.","authors":[],"award":"","doi":"10.1109/TVCG.2024.3368060","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20243368060","image_caption":"","keywords":["Insight recommendation, mixed-initiative, interface agent, large language models, visual analytics"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"LEVA: Using Large Language Models to Enhance Visual Analytics","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20223229017","abstract":"We present V-Mail, a framework of cross-platform applications, interactive techniques, and communication protocols for improved multi-person correspondence about spatial 3D datasets. Inspired by the daily use of e-mail, V-Mail seeks to enable a similar style of rapid, multi-person communication accessible on any device; however, it aims to do this in the new context of spatial 3D communication, where limited access to 3D graphics hardware typically prevents such communication. The approach integrates visual data storytelling with data exploration, spatial annotations, and animated transitions. V-Mail ``data stories'' are exported in a standard video file format to establish a common baseline level of access on (almost) any device. The V-Mail framework also includes a series of complementary client applications and plugins that enable different degrees of story co-authoring and data exploration, adjusted automatically to match the capabilities of various devices. A lightweight, phone-based V-Mail app makes it possible to annotate data by adding captions to the video. These spatial annotations are then immediately accessible to team members running high-end 3D graphics visualization systems that also include a V-Mail client, implemented as a plugin. 
Results and evaluation from applying V-Mail to assist communication within an interdisciplinary science team studying Antarctic ice sheets confirm the utility of the asynchronous, cross-platform collaborative framework while also highlighting some current limitations and opportunities for future work.","authors":[],"award":"","doi":"10.1109/TVCG.2022.3229017","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20223229017","image_caption":"","keywords":["Human-computer interaction, visualization of scientific 3D data, communication, storytelling, immersive analytics"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"V-Mail: 3D-Enabled Correspondence about Spatial Data on (Almost) All Your Devices","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20233261320","abstract":"In recent years, narrative visualization has gained much attention. Researchers have proposed different design spaces for various narrative visualization genres and scenarios to facilitate the creation process. As users' needs grow and automation technologies advance, increasingly more tools have been designed and developed. In this study, we summarized six genres of narrative visualization (annotated charts, infographics, timelines & storylines, data comics, scrollytelling & slideshow, and data videos) based on previous research and four types of tools (design spaces, authoring tools, ML/AI-supported tools and ML/AI-generator tools) based on the intelligence and automation level of the tools. We surveyed 105 papers and tools to study how automation can progressively engage in visualization design and narrative processes to help users easily create narrative visualizations. This research aims to provide an overview of current research and development in the automation involvement of narrative visualization tools. 
We discuss key research problems in each category and suggest new opportunities to encourage further research in the related domain.","authors":[],"award":"","doi":"10.1109/TVCG.2023.3261320","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20233261320","image_caption":"","keywords":["Data Visualization, Automatic Visualization, Narrative Visualization, Design Space, Authoring Tools, Survey"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"How Does Automation Shape the Process of Narrative Visualization: A Survey of Tools","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-cga-9745375","abstract":"We consider the general problem known as job shop scheduling, in which multiple jobs consist of sequential operations that need to be executed or served by appropriate machines having limited capacities. For example, train journeys (jobs) consist of moves and stops (operations) to be served by rail tracks and stations (machines). A schedule is an assignment of the job operations to machines and times where and when they will be executed. The developers of computational methods for job scheduling need tools enabling them to explore how their methods work. At a high level of generality, we define the system of pertinent exploration tasks and a combination of visualizations capable of supporting the tasks. We provide general descriptions of the purposes, contents, visual encoding, properties, and interactive facilities of the visualizations and illustrate them with images from an example implementation in air traffic management. We justify the design of the visualizations based on the tasks, principles of creating visualizations for pattern discovery, and scalability requirements. The outcomes of our research are sufficiently general to be of use in a variety of applications.","authors":[{"affiliations":"","email":"gennady.andrienko@iais.fraunhofer.de","is_corresponding":true,"name":"Gennady Andrienko"},{"affiliations":"","email":"natalia.andrienko@iais.fraunhofer.de","is_corresponding":false,"name":"Natalia Andrienko"},{"affiliations":"","email":"jmcordero@e-crida.enaire.es","is_corresponding":false,"name":"Jose Manuel Cordero Garcia"},{"affiliations":"","email":"dirk.hecker@iais.fraunhofer.de","is_corresponding":false,"name":"Dirk Hecker"},{"affiliations":"","email":"georgev@unipi.gr","is_corresponding":false,"name":"George A. 
Vouros"}],"award":"","doi":"10.1109/MCG.2022.3163437","event_id":"v-cga","event_title":"CG&A Invited Partnership Presentations","external_paper_link":"","fno":"9745375","has_fno":true,"has_image":false,"has_pdf":false,"id":"v-cga-9745375","image_caption":"","keywords":["Visualization, Schedules, Task Analysis, Optimization, Job Shop Scheduling, Data Analysis, Processor Scheduling, Iterative Methods"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"cga0","session_room":"None","session_title":"CG&A","session_uid":"v-cga","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["CG&A"],"time_stamp":"","title":"Supporting Visual Exploration of Iterative Job Scheduling","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-cga-9612019","abstract":"The number of online news articles available nowadays is rapidly increasing. When exploring articles on online news portals, navigation is mostly limited to the most recent ones. The spatial context and the history of topics are not immediately accessible. To support readers in the exploration or research of articles in large datasets, we developed an interactive 3D globe visualization. We worked with datasets from multiple online news portals containing up to 45,000 articles. Using agglomerative hierarchical clustering, we represent the referenced locations of news articles on a globe with different levels of detail. We employ two interaction schemes for navigating the viewpoint on the visualization, including support for hand-held devices and desktop PCs, and provide search functionality and interactive filtering. 
Based on this framework, we explore additional modules for jointly exploring the spatial and temporal domain of the dataset and incorporating live news into the visualization.","authors":[{"affiliations":"","email":"nicholas.ingulfsen@gmail.com","is_corresponding":false,"name":"Nicholas Ingulfsen"},{"affiliations":"","email":"simone.schaub@visinf.tu-darmstadt.de","is_corresponding":false,"name":"Simone Schaub-Meyer"},{"affiliations":"","email":"grossm@inf.ethz.ch","is_corresponding":false,"name":"Markus Gross"},{"affiliations":"","email":"tobias.guenther@fau.de","is_corresponding":true,"name":"Tobias G\u00fcnther"}],"award":"","doi":"10.1109/MCG.2021.3127434","event_id":"v-cga","event_title":"CG&A Invited Partnership Presentations","external_paper_link":"","fno":"9612019","has_fno":true,"has_image":false,"has_pdf":false,"id":"v-cga-9612019","image_caption":"","keywords":["News Articles, Number Of Articles, Headlines, Interactive Visualization, Online News, Agglomerative Clustering, Local News, Interactive Exploration, Desktop PC, Different Levels Of Detail, News Portals, Spatial Information, User Study, 3D Space, Human-computer Interaction, Temporal Information, Third Dimension, Tablet Computer, Pie Chart, News Stories, 3D Visualization, Article Details, Visual Point, Bottom Of The Screen, Geospatial Data, Type Of Visualization, Largest Dataset, Tagging Location, Live Feed"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"cga0","session_room":"None","session_title":"CG&A","session_uid":"v-cga","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["CG&A"],"time_stamp":"","title":"News Globe: Visualization of Geolocalized News Articles","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-cga-9866547","abstract":"In many applications, developed deep-learning models need to be iteratively debugged and refined to improve the model efficiency over time. Debugging some models, such as temporal multilabel classification (TMLC) where each data point can simultaneously belong to multiple classes, can be especially challenging due to the complexity of the analysis and instances that need to be reviewed. In this article, focusing on video activity recognition as an application of TMLC, we propose DETOXER, an interactive visual debugging system to support finding different error types and scopes through providing multiscope explanations.","authors":[{"affiliations":"","email":"mahsannourani@ufl.edu","is_corresponding":true,"name":"Mahsan Nourani"},{"affiliations":"","email":"chiradeep.roy@utdallas.edu","is_corresponding":false,"name":"Chiradeep Roy"},{"affiliations":"","email":"dhoneycutt@ufl.edu","is_corresponding":false,"name":"Donald R. Honeycutt"},{"affiliations":"","email":"eragan@ufl.edu","is_corresponding":false,"name":"Eric D. 
Ragan"},{"affiliations":"","email":"vibhav.gogate@utdallas.edu","is_corresponding":false,"name":"Vibhav Gogate"}],"award":"","doi":"10.1109/MCG.2022.3201465","event_id":"v-cga","event_title":"CG&A Invited Partnership Presentations","external_paper_link":"","fno":"9866547","has_fno":true,"has_image":false,"has_pdf":false,"id":"v-cga-9866547","image_caption":"","keywords":["Debugging, Analytical Models, Heating Systems, Data Models, Computational Modeling, Activity Recognition, Deep Learning, Multi Label Classification, Visualization Tool, Temporal Classification, Visual Debugging, False Positive, False Negative, Active Components, Deep Learning Models, Types Of Errors, Video Frames, Error Detection, Detection Of Types, Action Recognition, Interactive Visualization, Sequence Of Points, Design Goals, Positive Errors, Critical Outcomes, Error Patterns, Global Panel, False Negative Rate, False Positive Rate, Heatmap, Visual Approach, Truth Labels, True Positive, Confidence Score, Anomaly Detection, Interface Elements"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"cga0","session_room":"None","session_title":"CG&A","session_uid":"v-cga","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["CG&A"],"time_stamp":"","title":"DETOXER: A Visual Debugging Tool With Multiscope Explanations for Temporal Multilabel Classification","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-cga-10091124","abstract":"The Internet of Food (IoF) is an emerging field in smart foodsheds, involving the creation of a knowledge graph (KG) about the environment, agriculture, food, diet, and health. However, the heterogeneity and size of the KG present challenges for downstream tasks, such as information retrieval and interactive exploration. To address those challenges, we propose an interactive knowledge and learning environment (IKLE) that integrates three programming and modeling languages to support multiple downstream tasks in the analysis pipeline. To make IKLE easier to use, we have developed algorithms to automate the generation of each language. In addition, we collaborated with domain experts to design and develop a dataflow visualization system, which embeds the automatic language generations into components and allows users to build their analysis pipeline by dragging and connecting components of interest. We have demonstrated the effectiveness of IKLE through three real-world case studies in smart foodsheds.","authors":[{"affiliations":"","email":"tu.253@osu.edu","is_corresponding":true,"name":"Yamei Tu"},{"affiliations":"","email":"wang.5502@osu.edu","is_corresponding":false,"name":"Xiaoqi Wang"},{"affiliations":"","email":"qiu.580@osu.edu","is_corresponding":false,"name":"Rui Qiu"},{"affiliations":"","email":"hwshen@cse.ohio-state.edu","is_corresponding":false,"name":"Han-Wei Shen"},{"affiliations":"","email":"mmmille6@wisc.edu","is_corresponding":false,"name":"Michelle Miller"},{"affiliations":"","email":"jinmeng.rao@wisc.edu","is_corresponding":false,"name":"Jinmeng Rao"},{"affiliations":"","email":"song.gao@wisc.edu","is_corresponding":false,"name":"Song Gao"},{"affiliations":"","email":"prhuber@ucdavis.edu","is_corresponding":false,"name":"Patrick R. 
Huber"},{"affiliations":"","email":"adhollander@ucdavis.edu","is_corresponding":false,"name":"Allan D. Hollander"},{"affiliations":"","email":"matthew@ic-foods.org","is_corresponding":false,"name":"Matthew Lange"},{"affiliations":"","email":"cgarcia@tacc.utexas.edu","is_corresponding":false,"name":"Christian R. Garcia"},{"affiliations":"","email":"jstubbs@tacc.utexas.edu","is_corresponding":false,"name":"Joe Stubbs"}],"award":"","doi":"10.1109/MCG.2023.3263960","event_id":"v-cga","event_title":"CG&A Invited Partnership Presentations","external_paper_link":"","fno":"10091124","has_fno":true,"has_image":false,"has_pdf":false,"id":"v-cga-10091124","image_caption":"","keywords":["Learning Environment, Interactive Learning Environments, Programming Language, Visual System, Analysis Pipeline, Patterns In Data, Flow Data, Human-computer Interaction, Food Systems, Information Retrieval, Domain Experts, Language Model, Automatic Generation, Interactive Exploration, Cyberinfrastructure, Pre-trained Language Models, Resource Description Framework, SPARQL Query, DBpedia, Entity Types, Data Visualization, Resilience Analysis, Load Data, Query Results, Supply Chain, Network Flow"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"cga0","session_room":"None","session_title":"CG&A","session_uid":"v-cga","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["CG&A"],"time_stamp":"","title":"An Interactive Knowledge and Learning Environment in Smart Foodsheds","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-cga-10198358","abstract":"Set visualization facilitates the exploration and analysis of set-type data. However, how sets should be visualized when the data are uncertain is still an open research challenge. To address the problem of depicting uncertainty in set visualization, we ask 1) which aspects of set type data can be affected by uncertainty and 2) which characteristics of uncertainty influence the visualization design. We answer these research questions by first describing a conceptual framework that brings together 1) the information that is primarily relevant in sets (i.e., set membership, set attributes, and element attributes) and 2) different plausible categories of (un)certainty (i.e., certainty, undefined uncertainty as a binary fact, and defined uncertainty as quantifiable measure). Following the structure of our framework, we systematically discuss basic visualization examples of integrating uncertainty in set visualizations. 
We draw on existing knowledge about general uncertainty visualization and previous evidence of its effectiveness.","authors":[{"affiliations":"","email":"christian.tominski@uni-rostock.de","is_corresponding":false,"name":"Christian Tominski"},{"affiliations":"","email":"m.behrisch@uu.nl","is_corresponding":true,"name":"Michael Behrisch"},{"affiliations":"","email":"susanne.bleisch@fhnw.ch","is_corresponding":false,"name":"Susanne Bleisch"},{"affiliations":"","email":"sara.fabrikant@geo.uzh.ch","is_corresponding":false,"name":"Sara Irina Fabrikant"},{"affiliations":"","email":"eva.mayr@donau-uni.ac.at","is_corresponding":false,"name":"Eva Mayr"},{"affiliations":"","email":"miksch@ifs.tuwien.ac.at","is_corresponding":false,"name":"Silvia Miksch"},{"affiliations":"","email":"helen.purchase@monash.edu","is_corresponding":false,"name":"Helen Purchase"}],"award":"","doi":"10.1109/MCG.2023.3300441","event_id":"v-cga","event_title":"CG&A Invited Partnership Presentations","external_paper_link":"","fno":"10198358","has_fno":true,"has_image":false,"has_pdf":false,"id":"v-cga-10198358","image_caption":"","keywords":["Uncertainty, Data Visualization, Measurement Uncertainty, Visual Analytics, Terminology, Task Analysis, Surveys, Conceptual Framework, Cardinality, Data Visualization, Visual Representation, Measure Of The Amount, Set Membership, Intersection Set, Visual Design, Different Types Of Uncertainty, Missing Values, Visual Methods, Fuzzy Set, Age Of Students, Color Values, Uncertainty Values, Explicit Representation, Aggregate Value, Exact Information, Uncertain Information, Table Cells, Temporal Uncertainty, Uncertain Data, Representation Of Uncertainty, Implicit Representation, Spatial Uncertainty, Point Symbol, Visual Clutter, Color Hue, Graphical Elements, Uncertain Value"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"cga0","session_room":"None","session_title":"CG&A","session_uid":"v-cga","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["CG&A"],"time_stamp":"","title":"Visualizing Uncertainty in Sets","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-cga-10227838","abstract":"We report a study investigating the viability of using interactive visualizations to aid architectural design with building codes. While visualizations have been used to support general architectural design exploration, existing computational solutions treat building codes as separate from, rather than part of, the design process, creating challenges for architects. Through a series of participatory design studies with professional architects, we found that interactive visualizations have promising potential to aid design exploration and sensemaking in early stages of architectural design by providing feedback about potential allowances and consequences of design decisions. However, implementing a visualization system necessitates addressing the complexity and ambiguity inherent in building codes. 
To tackle these challenges, we propose various user-driven knowledge management mechanisms for integrating, negotiating, interpreting, and documenting building code rules.","authors":[{"affiliations":"","email":"snowak@sfu.ca","is_corresponding":true,"name":"Stan Nowak"},{"affiliations":"","email":"bon.aseniero@autodesk.com","is_corresponding":false,"name":"Bon Adriel Aseniero"},{"affiliations":"","email":"lyn@sfu.ca","is_corresponding":false,"name":"Lyn Bartram"},{"affiliations":"","email":"tovi@dgp.toronto.edu","is_corresponding":false,"name":"Tovi Grossman"},{"affiliations":"","email":"George.fitzmaurice@autodesk.com","is_corresponding":false,"name":"George Fitzmaurice"},{"affiliations":"","email":"justin.matejka@autodesk.com","is_corresponding":false,"name":"Justin Matejka"}],"award":"","doi":"10.1109/MCG.2023.3307971","event_id":"v-cga","event_title":"CG&A Invited Partnership Presentations","external_paper_link":"","fno":"10227838","has_fno":true,"has_image":false,"has_pdf":false,"id":"v-cga-10227838","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"cga0","session_room":"None","session_title":"CG&A","session_uid":"v-cga","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["CG&A"],"time_stamp":"","title":"Identifying Visualization Opportunities to Help Architects Manage the Complexity of Building Codes","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-cga-10078374","abstract":"Existing dynamic weighted graph visualization approaches rely on users\u2019 mental comparison to perceive temporal evolution of dynamic weighted graphs, hindering users from effectively analyzing changes across multiple timeslices. We propose DiffSeer, a novel approach for dynamic weighted graph visualization by explicitly visualizing the differences of graph structures (e.g., edge weight differences) between adjacent timeslices. Specifically, we present a novel nested matrix design that overviews the graph structure differences over a time period as well as shows graph structure details in the timeslices of user interest. By collectively considering the overall temporal evolution and structure details in each timeslice, an optimization-based node reordering strategy is developed to group nodes with similar evolution patterns and highlight interesting graph structure details in each timeslice. We conducted two case studies on real-world graph datasets and in-depth interviews with 12 target users to evaluate DiffSeer. 
The results demonstrate its effectiveness in visualizing dynamic weighted graphs.","authors":[{"affiliations":"","email":"wenxiaolin@stu.scu.edu.cn","is_corresponding":false,"name":"Xiaolin Wen"},{"affiliations":"","email":"yongwang@smu.edu.sg","is_corresponding":true,"name":"Yong Wang"},{"affiliations":"","email":"wumeixuan@stu.scu.edu.cn","is_corresponding":false,"name":"Meixuan Wu"},{"affiliations":"","email":"wangfengjie@stu.scu.edu.cn","is_corresponding":false,"name":"Fengjie Wang"},{"affiliations":"","email":"xuanwu.yue@connect.ust.hk","is_corresponding":false,"name":"Xuanwu Yue"},{"affiliations":"","email":"shenqm@sustech.edu.cn","is_corresponding":false,"name":"Qiaomu Shen"},{"affiliations":"","email":"mayx@sustech.edu.cn","is_corresponding":false,"name":"Yuxin Ma"},{"affiliations":"","email":"zhumin@scu.edu.cn","is_corresponding":false,"name":"Min Zhu"}],"award":"","doi":"10.1109/MCG.2023.3248289","event_id":"v-cga","event_title":"CG&A Invited Partnership Presentations","external_paper_link":"","fno":"10078374","has_fno":true,"has_image":false,"has_pdf":false,"id":"v-cga-10078374","image_caption":"","keywords":["Visibility Graph, Spatial Patterns, Weight Change, In-depth Interviews, Temporal Changes, Temporal Evolution, Negative Changes, Interesting Patterns, Edge Weights, Real-world Datasets, Graph Structure, Visual Approach, Dynamic Visualization, Dynamic Graph, Financial Networks, Graph Datasets, Similar Evolutionary Patterns, User Interviews, Similar Changes, Chinese New Year, Sector Indices, Original Graph, Red Rectangle, Nodes In Order, Stock Market Crash, Stacked Bar Charts, Different Types Of Matrices, Chinese New, Blue Rectangle"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"cga0","session_room":"None","session_title":"CG&A","session_uid":"v-cga","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["CG&A"],"time_stamp":"","title":"DiffSeer: Difference-Based Dynamic Weighted Graph Visualization","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-cga-10128890","abstract":"Some 15 years ago, Visualization Viewpoints published an influential article titled Rainbow Color Map (Still) Considered Harmful (Borland and Taylor, 2007). The paper argued that the \u201crainbow colormap\u2019s characteristics of confusing the viewer, obscuring the data and actively misleading interpretation make it a poor choice for visualization.\u201d Subsequent articles often repeat and extend these arguments, so much so that avoiding rainbow colormaps, along with their derivatives, has become dogma in the visualization community. Despite this loud and persistent recommendation, scientists continue to use rainbow colormaps. Have we failed to communicate our message, or do rainbow colormaps offer advantages that have not been fully appreciated? We argue that rainbow colormaps have properties that are underappreciated by existing design conventions. We explore key critiques of the rainbow in the context of recent research to understand where and how rainbows might be misunderstood. 
Choosing a colormap is a complex task, and rainbow colormaps can be useful for selected applications.","authors":[{"affiliations":"","email":"cware@ccom.unh.edu","is_corresponding":false,"name":"Colin Ware"},{"affiliations":"","email":"mstone@acm.org","is_corresponding":true,"name":"Maureen Stone"},{"affiliations":"","email":"danielle.szafir@cs.unc.edu","is_corresponding":false,"name":"Danielle Albers Szafir"}],"award":"","doi":"10.1109/MCG.2023.3246111","event_id":"v-cga","event_title":"CG&A Invited Partnership Presentations","external_paper_link":"","fno":"10128890","has_fno":true,"has_image":false,"has_pdf":false,"id":"v-cga-10128890","image_caption":"","keywords":["Image Color Analysis, Semantics, Data Visualization, Estimation, Reliability Engineering"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"cga0","session_room":"None","session_title":"CG&A","session_uid":"v-cga","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["CG&A"],"time_stamp":"","title":"Rainbow Colormaps Are Not All Bad","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-cga-10207831","abstract":"The membership function categorizes quantities along with a confidence degree. This article investigates a generic user interaction based on this function for categorizing various types of quantities without modification, which empowers users to articulate uncertainty categorization and enhance their visual data analysis significantly. We present the technique design and an online prototype, supplemented with insights from three case studies that highlight the technique\u2019s efficacy among different types of quantities. Furthermore, we conduct a formal user study to scrutinize the process and reasoning users employ while utilizing our technique. The findings indicate that our technique can help users create customized categories. 
Both our code and the interactive prototype are made available as open-source resources, intended for application across varied domains as a generic tool.","authors":[{"affiliations":"","email":"liuliqun.cs@gmail.com","is_corresponding":true,"name":"Liqun Liu"},{"affiliations":"","email":"romain.vuillemot@ec-lyon.fr","is_corresponding":false,"name":"Romain Vuillemot"}],"award":"","doi":"10.1109/MCG.2023.3301449","event_id":"v-cga","event_title":"CG&A Invited Partnership Presentations","external_paper_link":"","fno":"10207831","has_fno":true,"has_image":false,"has_pdf":false,"id":"v-cga-10207831","image_caption":"","keywords":["Data Visualization, Uncertainty, Prototypes, Fuzzy Logic, Image Color Analysis, Fuzzy Sets, Open Source Software, General Function, Membership Function, User Study, Classification Process, Fuzzy Logic, Quantitative Values, Visualization Techniques, Amount Of Type, Fuzzy Theory, General Interaction, Temperature Dataset, Interaction Techniques, Carbon Dioxide, Computation Time, Rule Based, Web Page, Real World Scenarios, Fuzzy Set, Domain Experts, Supercritical CO 2, Parallel Coordinates, Fuzzy System, Fuzzy Clustering, Interactive Visualization, Amount Of Items, Large Scale Problems"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"cga0","session_room":"None","session_title":"CG&A","session_uid":"v-cga","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["CG&A"],"time_stamp":"","title":"A Generic Interactive Membership Function for Categorization of Quantities","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-cga-10201383","abstract":"Although visualizations are a useful tool for helping people to understand information, they can also have unintended effects on human cognition. This is especially true for uncertain information, which is difficult for people to understand. Prior work has found that different methods of visualizing uncertain information can produce different patterns of decision making from users. However, uncertainty can also be represented via text or numerical information, and few studies have systematically compared these types of representations to visualizations of uncertainty. We present two experiments that compared visual representations of risk (icon arrays) to numerical representations (natural frequencies) in a wildfire evacuation task. Like prior studies, we found that different types of visual cues led to different patterns of decision making. In addition, our comparison of visual and numerical representations of risk found that people were more likely to evacuate when they saw visualizations than when they saw numerical representations. These experiments reinforce the idea that design choices are not neutral: seemingly minor differences in how information is represented can have important impacts on human risk perception and decision making.","authors":[{"affiliations":"","email":"lematze@sandia.gov","is_corresponding":true,"name":"Laura E. Matzen"},{"affiliations":"","email":"bchowel@sandia.gov","is_corresponding":false,"name":"Breannan C. Howell"},{"affiliations":"","email":"mctrumb@sandia.gov","is_corresponding":false,"name":"Michael C. S. 
Trumbo"},{"affiliations":"","email":"kmdivis@sandia.gov","is_corresponding":false,"name":"Kristin M. Divis"}],"award":"","doi":"10.1109/MCG.2023.3299875","event_id":"v-cga","event_title":"CG&A Invited Partnership Presentations","external_paper_link":"","fno":"10201383","has_fno":true,"has_image":false,"has_pdf":false,"id":"v-cga-10201383","image_caption":"","keywords":["Visualization, Uncertainty, Decision Making, Costs, Task Analysis, Laboratories, Information Analysis, Decision Making, Visual Representation, Numerical Representation, Decision Patterns, Deterministic, Risk Perception, Specific Information, Fundamental Frequency, Point Values, Representation Of Information, Risk Information, Visual Conditions, Numerous Conditions, Human Decision, Numerical Information, Impact Of Different Types, Uncertain Information, Type Of Visualization, Differences In Risk Perception, Representation Of Uncertainty, Increase In Participation, Participants In Experiment, Individual Difference Measures, Sandia National Laboratories, Risk Propensity, Bonus Payments, Average Response Time, Difference In Probability, Response Time"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"cga0","session_room":"None","session_title":"CG&A","session_uid":"v-cga","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["CG&A"],"time_stamp":"","title":"Numerical and Visual Representations of Uncertainty Lead to Different Patterns of Decision Making","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-cga-10414267","abstract":"Traditional approaches to data visualization have often focused on comparing different subsets of data, and this is reflected in the many techniques developed and evaluated over the years for visual comparison. Similarly, common workflows for exploratory visualization are built upon the idea of users interactively applying various filter and grouping mechanisms in search of new insights. This paradigm has proven effective at helping users identify correlations between variables that can inform thinking and decision-making. However, recent studies show that consumers of visualizations often draw causal conclusions even when not supported by the data. Motivated by these observations, this article highlights recent advances from a growing community of researchers exploring methods that aim to directly support visual causal inference. However, many of these approaches have their own limitations, which limit their use in many real-world scenarios. 
This article, therefore, also outlines a set of key open challenges and corresponding priorities for new research to advance the state of the art in visual causal inference.","authors":[{"affiliations":"","email":"borland@renci.org","is_corresponding":false,"name":"David Borland"},{"affiliations":"","email":"zeyuwang@cs.unc.edu","is_corresponding":false,"name":"Arran Zeyu Wang"},{"affiliations":"","email":"gotz@unc.edu","is_corresponding":false,"name":"David Gotz"}],"award":"","doi":"10.1109/MCG.2023.3338788","event_id":"v-cga","event_title":"CG&A Invited Partnership Presentations","external_paper_link":"","fno":"10414267","has_fno":true,"has_image":false,"has_pdf":false,"id":"v-cga-10414267","image_caption":"","keywords":["Analytical Models, Correlation, Visual Analytics, Decision Making, Data Visualization, Reliability Theory, Cognition, Inference Algorithms, Causal Inference, Causality, Social Media, Exploratory Analysis, Data Visualization, Visual Representation, Visual Analysis, Visualization Tool, Open Challenges, Interactive Visualization, Assembly Line, Different Subsets Of Data, Visual Analytics Tool, Data Driven Decision Making, Data Quality, Statistical Models, Causal Effect, Visual System, Use Of Social Media, Bar Charts, Causal Model, Causal Graph, Chart Types, Directed Acyclic Graph, Visual Design, Portion Of The Dataset, Causal Structure, Prior Section, Causal Explanations, Line Graph"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"cga0","session_room":"None","session_title":"CG&A","session_uid":"v-cga","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["CG&A"],"time_stamp":"","title":"Using Counterfactuals to Improve Causal Inferences From Visualizations","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-cga-10478355","abstract":"Recent developments in artificial intelligence (AI) and machine learning (ML) have led to the creation of powerful generative AI methods and tools capable of producing text, code, images, and other media in response to user prompts. Significant interest in the technology has led to speculation about what fields, including visualization, can be augmented or replaced by such approaches. However, there remains a lack of understanding about which visualization activities may be particularly suitable for the application of generative AI. Drawing on examples from the field, we map current and emerging capabilities of generative AI across the different phases of the visualization lifecycle and describe salient opportunities and challenges.","authors":[{"affiliations":"","email":"rahul.basole@accenture.com","is_corresponding":false,"name":"Rahul C. 
Basole"},{"affiliations":"","email":"timothy.major@accenture.com","is_corresponding":true,"name":"Timothy Major"}],"award":"","doi":"10.1109/MCG.2024.3362168","event_id":"v-cga","event_title":"CG&A Invited Partnership Presentations","external_paper_link":"","fno":"10478355","has_fno":true,"has_image":false,"has_pdf":false,"id":"v-cga-10478355","image_caption":"","keywords":["Generative AI, Art, Artificial Intelligence, Machine Learning, Visualization, Media, Augmented Reality, Machine Learning, Visual Representation, Professional Knowledge, Creative Process, Domain Experts, Generalization Capability, Development Of Artificial Intelligence, Artificial Intelligence Capabilities, Iterative Process, Natural Language, Commercial Software, Hallucinations, Team Sports, Design Requirements, Intelligence Agencies, Recommender Systems, User Requirements, Iterative Design, Use Of Artificial Intelligence, Visual Design, Phase Assemblage, Data Literacy"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"cga0","session_room":"None","session_title":"CG&A","session_uid":"v-cga","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["CG&A"],"time_stamp":"","title":"Generative AI for Visualization: Opportunities and Challenges","youtube_ff_id":null,"youtube_ff_url":null}] +[{"UID":"v-short-1040","abstract":"From dirty data to intentional deception, there are many threats to the validity of data-driven decisions. Making use of data, especially new or unfamiliar data, therefore requires a degree of trust or verification. How is this trust established? In this paper, we present the results of a series of interviews with both producers and consumers of data artifacts (outputs of data ecosystems like spreadsheets, charts, and dashboards) aimed at understanding strategies and obstacles to building trust in data. We find a recurring need, but lack of existing standards, for data validation and verification, especially among data consumers. 
We therefore propose a set of data guards: methods and tools for fostering trust in data artifacts.","authors":[{"affiliations":["Tableau Research, Seattle, United States"],"email":"nicole.sultanum@gmail.com","is_corresponding":true,"name":"Nicole Sultanum"},{"affiliations":["Tableau Research, Seattle, United States"],"email":"bromley.denny@gmail.com","is_corresponding":false,"name":"Dennis Bromley"},{"affiliations":["Northeastern University, Portland, United States"],"email":"m.correll@northeastern.edu","is_corresponding":false,"name":"Michael Correll"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1040","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Data Guards: Challenges and Solutions for Fostering Trust in Data","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1047","abstract":"In the rapidly evolving field of deep learning, the traditional methodologies for designing deep learning models predominantly rely on code-based frameworks. While these approaches provide flexibility, they also create a significant barrier to entry for non-experts and obscure the immediate impact of architectural decisions on model performance. In response to this challenge, recent no-code approaches have been developed with the aim of enabling easy model development through graphical interfaces. However, both traditional and no-code methodologies share a common limitation: the inability to predict model outcomes or identify issues without executing the model. To address this limitation, we introduce an intuitive visual feedback-based no-code approach to visualize and analyze deep learning models during the design phase. This approach utilizes dataflow-based visual programming with dynamic visual encoding of model architecture. A user study was conducted with deep learning developers to demonstrate the effectiveness of our approach in enhancing the model design process, improving model understanding, and facilitating a more intuitive development experience. 
The findings of this study suggest that real-time architectural visualization significantly contributes to more efficient model development and a deeper understanding of model behaviors.","authors":[{"affiliations":["VIENCE Inc., Seoul, Korea, Republic of","Korea University, Seoul, Korea, Republic of"],"email":"juny0603@gmail.com","is_corresponding":true,"name":"JunYoung Choi"},{"affiliations":["VIENCE Inc., Seoul, Korea, Republic of"],"email":"wings159@vience.co.kr","is_corresponding":false,"name":"Sohee Park"},{"affiliations":["Korea University, Seoul, Korea, Republic of"],"email":"hellenkoh@gmail.com","is_corresponding":false,"name":"GaYeon Koh"},{"affiliations":["VIENCE Inc., Seoul, Korea, Republic of"],"email":"k0seo0330@vience.co.kr","is_corresponding":false,"name":"Youngseo Kim"},{"affiliations":["VIENCE Inc., Seoul, Korea, Republic of","Korea University, Seoul, Korea, Republic of"],"email":"wkjeong@korea.ac.kr","is_corresponding":false,"name":"Won-Ki Jeong"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1047","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Intuitive Design of Deep Learning Models through Visual Feedback","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1049","abstract":"This comparative study evaluates various neural surface reconstruction methods, particularly focusing on their implications for scientific visualization through reconstructing 3D surfaces via multi-view rendering images. We categorize ten methods into neural radiance fields and neural implicit surfaces, uncovering the benefits of leveraging distance functions (i.e., SDFs and UDFs) to enhance the accuracy and smoothness of the reconstructed surfaces. Our findings highlight the efficiency and quality of NeuS2 for reconstructing closed surfaces and identify NeUDF as a promising candidate for reconstructing open surfaces despite some limitations. We further pinpoint directions for future research, including improving detail capture, optimizing UDF computations, and refining surface extraction methods. 
By sharing our benchmark dataset, we invite researchers to test the performance of their methods, contributing to the advancement of surface reconstruction solutions for scientific visualization.","authors":[{"affiliations":["University of Notre Dame, Notre Dame, United States"],"email":"syao2@nd.edu","is_corresponding":true,"name":"Siyuan Yao"},{"affiliations":["Wuhan University, Wuhan, China"],"email":"song.wx@whu.edu.cn","is_corresponding":false,"name":"Weixi Song"},{"affiliations":["University of Notre Dame, Notre Dame, United States"],"email":"chaoli.wang@nd.edu","is_corresponding":false,"name":"Chaoli Wang"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1049","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"A Comparative Study of Neural Surface Reconstruction for Scientific Visualization","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1054","abstract":"Direct volume rendering using ray-casting is widely used in practice. By using GPUs and applying acceleration techniques such as empty space skipping, high frame rates are possible on modern hardware. This enables performance-critical use-cases such as virtual reality volume rendering. The currently fastest known technique uses volumetric distance maps to skip empty sections of the volume during ray-casting but requires the distance map to be updated per transfer function change. In this paper, we demonstrate a technique for subdividing the volume intensity range into partitions and deriving what we call partitioned distance maps. These can be used to accelerate the distance map computation for a newly changed transfer function by a factor of up to 30. 
This allows the currently fastest known empty space skipping approach to be used while maintaining high frame rates even when the transfer function is changed frequently.","authors":[{"affiliations":["University of Applied Sciences Wiener Neustadt, Wiener Neustadt, Austria"],"email":"michael.rauter@fhwn.ac.at","is_corresponding":true,"name":"Michael Rauter"},{"affiliations":["Medical University of Vienna, Vienna, Austria"],"email":"lukas.a.zimmermann@meduniwien.ac.at","is_corresponding":false,"name":"Lukas Zimmermann PhD"},{"affiliations":["University of Applied Sciences Wiener Neustadt, Wiener Neustadt, Austria"],"email":"markus.zeilinger@fhwn.ac.at","is_corresponding":false,"name":"Markus Zeilinger PhD"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1054","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Accelerating Transfer Function Update for Distance Map based Volume Rendering","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1056","abstract":"We present FCNR, a fast compressive neural representation for tens of thousands of visualization images under varying viewpoints and timesteps. The existing NeRVI solution, albeit enjoying a high compression rate, incurs slow speeds in encoding and decoding. Built on the recent advances in stereo image compression, FCNR assimilates stereo context modules and joint context transfer modules to compress image pairs. Our solution significantly improves encoding and decoding speed while maintaining high reconstruction quality and satisfying compression rate. 
To demonstrate its effectiveness, we compare FCNR with state-of-the-art neural compression methods, including E-NeRV, HNeRV, NeRVI, and ECSIC.","authors":[{"affiliations":["University of Notre Dame, Notre Dame, United States"],"email":"ylu25@nd.edu","is_corresponding":true,"name":"Yunfei Lu"},{"affiliations":["University of Notre Dame, Notre Dame, United States"],"email":"pgu@nd.edu","is_corresponding":false,"name":"Pengfei Gu"},{"affiliations":["University of Notre Dame, Notre Dame, United States"],"email":"chaoli.wang@nd.edu","is_corresponding":false,"name":"Chaoli Wang"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1056","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"FCNR: Fast Compressive Neural Representation of Visualization Images","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1057","abstract":"Real-world datasets often consist of quantitative and categorical variables. The analyst needs to focus on either kind separately or both jointly. We propose a visualization technique tackling these challenges that supports visual cluster and set analysis. In this paper, we investigate how its visualization parameters affect the accuracy and speed of cluster and set analysis tasks in a controlled experiment. Our findings show that, with the proper settings, our visualization can support both task types well. 
However, we did not find settings suitable for the joint task, which provides opportunities for future research.","authors":[{"affiliations":["TU Wien, Vienna, Austria"],"email":"nikolaus.piccolotto@tuwien.ac.at","is_corresponding":true,"name":"Nikolaus Piccolotto"},{"affiliations":["TU Wien, Vienna, Austria"],"email":"mwallinger@ac.tuwien.ac.at","is_corresponding":false,"name":"Markus Wallinger"},{"affiliations":["Institute of Visual Computing and Human-Centered Technology, Vienna, Austria"],"email":"miksch@ifs.tuwien.ac.at","is_corresponding":false,"name":"Silvia Miksch"},{"affiliations":["TU Wien, Vienna, Austria"],"email":"markus.boegl@tuwien.ac.at","is_corresponding":false,"name":"Markus B\u00f6gl"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1057","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"On Combined Visual Cluster and Set Analysis","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1058","abstract":"Semantic interaction (SI) in Dimension Reduction (DR) of images allows users to incorporate feedback through direct manipulation of the 2D positions of images. Through interaction, users specify a set of pairwise relationships that the DR should aim to capture. Existing methods for images incorporate feedback into the DR through feature weights on abstract embedding features. However, if the original embedding features do not suitably capture the user's task, then the DR cannot either. We propose ImageSI, an SI method for image DR that incorporates user feedback directly into the image model to update the underlying embeddings, rather than weighting them. In doing so, ImageSI ensures that the embeddings suitably capture the features necessary for the task so that the DR can subsequently organize images using those features. We present two variations of ImageSI using different loss functions - ImageSI_MDS-Inverse, which prioritizes the explicit pairwise relationships from the interaction, and ImageSI_Triplet, which prioritizes clustering, using the interaction to define groups of images. 
Finally, we present a usage scenario and a simulation-based evaluation to demonstrate the utility of ImageSI and compare it to current methods.","authors":[{"affiliations":["Virginia Tech, Blacksburg, United States"],"email":"jiayuelin@vt.edu","is_corresponding":false,"name":"Jiayue Lin"},{"affiliations":["Tulane University, New Orleans, United States"],"email":"rfaust1@tulane.edu","is_corresponding":true,"name":"Rebecca Faust"},{"affiliations":["Virginia Tech, Blacksburg, United States"],"email":"north@vt.edu","is_corresponding":false,"name":"Chris North"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1058","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"ImageSI: Semantic Interaction for Deep Learning Image Projections","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1059","abstract":"Gantt charts are a widely-used idiom for visualizing temporal discrete event sequence data where dependencies exist between events. They are popular in domains such as manufacturing and computing for their intuitive layout of such data. However, these domains frequently generate data at scales which tax both the visual representation and the ability to render it at interactive speeds. To aid visualization developers who use Gantt charts in these situations, we develop a task taxonomy of low-level visualization tasks supported by Gantt charts and connect them to the data queries needed to support them. Our taxonomy is derived through a systematic literature survey of visualizations using Gantt charts over the past 30 years.","authors":[{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"sayefsakin@sci.utah.edu","is_corresponding":true,"name":"Sayef Azad Sakin"},{"affiliations":["The University of Utah, Salt Lake City, United States"],"email":"kisaacs@sci.utah.edu","is_corresponding":false,"name":"Katherine E. 
Isaacs"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1059","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"A Literature-based Visualization Task Taxonomy for Gantt charts","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1062","abstract":"Annotations are a critical component of visualizations, helping viewers interpret the visual representation and highlighting critical data insights. Despite its significant role, we lack an understanding of how annotations can be incorporated into other data representations, such as physicalizations and sonifications. Given the emergent nature of these representations, sonifications and physicalizations lack formalized conventions (e.g., design space, vocabulary) that can introduce challenges for audiences to interpret the intended data encoding. To address this challenge, this work focuses on how annotations can be more tightly integrated into the design process of creating sonifications and physicalization. In an exploratory study with 13 designers, we explore how visualization annotation techniques can be adapted to sonic and physical modalities. Our work highlights how annotations for sonification and physicalizations are inseparable from their data encodings","authors":[{"affiliations":["Whitman College, Walla Walla, United States"],"email":"sorensor@whitman.edu","is_corresponding":false,"name":"Rhys Sorenson-Graff"},{"affiliations":["University of Colorado Boulder, Boulder, United States"],"email":"sandra.bae@colorado.edu","is_corresponding":true,"name":"S. Sandra Bae"},{"affiliations":["Whitman College, Walla Walla, United States"],"email":"wirfsbro@colorado.edu","is_corresponding":false,"name":"Jordan Wirfs-Brock"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1062","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Integrating Annotations into the Design Process for Sonifications and Physicalizations","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1064","abstract":"Large Language Models (LLMs) have demonstrated remarkable versatility in visualization authoring, but often generate suboptimal designs that are invalid or fail to adhere to design guidelines for effective visualization. 
We present Bavisitter, a natural language interface that integrates established visualization design guidelines into LLMs. Based on our survey of design issues in LLM-generated visualizations, Bavisitter monitors the generated visualizations during a visualization authoring dialogue to detect an issue. When an issue is detected, it intervenes in the dialogue, suggesting possible solutions to the issue by modifying the prompts. We also demonstrate two use cases where Bavisitter detects and resolves design issues in actual LLM-generated visualizations.","authors":[{"affiliations":["Sungkyunkwan University, Suwon, Korea, Republic of"],"email":"jiwnchoi@skku.edu","is_corresponding":true,"name":"Jiwon Choi"},{"affiliations":["Sungkyunkwan University, Suwon, Korea, Republic of"],"email":"dlwodnd00@skku.edu","is_corresponding":false,"name":"Jaeung Lee"},{"affiliations":["Sungkyunkwan University, Suwon, Korea, Republic of"],"email":"jmjo@skku.edu","is_corresponding":false,"name":"Jaemin Jo"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1064","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Bavisitter: Integrating Design Guidelines into Large Language Models for Visualization Authoring","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1065","abstract":"Although many dimensionality reduction (DR) techniques employ stochastic methods for computational efficiency, such as negative sampling or stochastic gradient descent, their impact on the projection has been underexplored. In this work, we investigate how such stochasticity affects the stability of projections and present a novel DR technique, GhostUMAP, to measure the pointwise instability of projections. Our idea is to introduce clones of data points, \"ghosts\", into UMAP's layout optimization process. Ghosts are designed to be completely passive: they do not affect any others but are influenced by attractive and repulsive forces from the original data points. After a single optimization run, GhostUMAP can capture the projection instability of data points by measuring the variance of the projected positions of their ghosts. We also present a successive halving technique to reduce the computation of GhostUMAP. 
Our results suggest that GhostUMAP can reveal unstable data points with a reasonable computational overhead.","authors":[{"affiliations":["Sungkyunkwan University, Suwon, Korea, Republic of"],"email":"mw.jung@skku.edu","is_corresponding":true,"name":"Myeongwon Jung"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"takanori.fujiwara@liu.se","is_corresponding":false,"name":"Takanori Fujiwara"},{"affiliations":["Sungkyunkwan University, Suwon, Korea, Republic of"],"email":"jmjo@skku.edu","is_corresponding":false,"name":"Jaemin Jo"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1065","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"GhostUMAP: Measuring Pointwise Instability in Dimensionality Reduction","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1068","abstract":"Integrating textual content, such as titles, annotations, and captions, with visualizations facilitates comprehension and takeaways during data exploration. Yet current tools often lack mechanisms for integrating meaningful text with visual data. This paper introduces DASH, a bimodal data exploration tool that supports integrating semantic levels into the interactive process of visualization and text-based analysis. DASH operationalizes a modified version of Lundgard et al.'s semantic hierarchy model that categorizes data descriptions into four levels ranging from basic encodings to high-level insights. By leveraging this structured semantic level framework and a large language model's text generation capabilities, DASH enables the creation of data-driven narratives via drag-and-drop user interaction. Through a preliminary user evaluation, we discuss the utility of DASH's text and chart integration capabilities when participants perform data exploration with the tool. 
Based on the study's feedback and observations, we discuss implications for designing unified text and chart authoring tools.","authors":[{"affiliations":["Tableau Research, Seattle, United States"],"email":"bromley.denny@gmail.com","is_corresponding":true,"name":"Dennis Bromley"},{"affiliations":["Tableau Research, Palo Alto, United States"],"email":"vsetlur@tableau.com","is_corresponding":false,"name":"Vidya Setlur"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1068","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"DASH: A Bimodal Data Exploration Tool for Interactive Text and Visualizations","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1072","abstract":"Recent advancements in vision models have significantly enhanced their ability to perform complex chart understanding tasks, such as chart captioning and chart question answering. However, assessing how these models process charts remains challenging. Existing benchmarks only coarsely evaluate how well the model performs the given task without thoroughly evaluating the underlying mechanisms that drive performance, such as how models extract image embeddings. This gap limits our understanding of the model's perceptual capabilities regarding fundamental graphical components. Therefore, we introduce a novel evaluation framework designed to assess the graphical perception of image embedding models. In the context of chart comprehension, we examine two main aspects of channel effectiveness: accuracy and discriminability of various visual channels. We first assess channel accuracy through the linearity of embeddings, which is the degree to which the perceived magnitude is proportional to the size of the stimulus. Conversely, distances between embeddings serve as a measure of discriminability; embeddings that are far apart can be considered discriminable. Our experiments on a general image embedding model, CLIP, revealed that it perceives channel accuracy differently from humans and demonstrated distinct discriminability in specific channels such as length, tilt, and curvature.
We aim to extend our work into a more general benchmark for reliable visual encoders and to enhance models toward two distinct goals for future applications: precise chart comprehension and mimicking human perception.","authors":[{"affiliations":["Seoul National University, Seoul, Korea, Republic of"],"email":"dtngus0111@gmail.com","is_corresponding":true,"name":"Soohyun Lee"},{"affiliations":["Seoul National University, Seoul, Korea, Republic of"],"email":"jangsus1@snu.ac.kr","is_corresponding":false,"name":"Minsuk Chang"},{"affiliations":["Seoul National University, Seoul, Korea, Republic of"],"email":"shpark@hcil.snu.ac.kr","is_corresponding":false,"name":"Seokhyeon Park"},{"affiliations":["Seoul National University, Seoul, Korea, Republic of"],"email":"jseo@snu.ac.kr","is_corresponding":false,"name":"Jinwook Seo"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1072","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Assessing Graphical Perception of Image Embedding Models using Channel Effectiveness","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1078","abstract":"Data visualizations are reaching global audiences. As people who use Right-to-left (RTL) scripts constitute over a billion potential data visualization users, a need emerges to investigate how visualizations are communicated to them. Web design guidelines exist to assist designers in adapting to different reading directions, yet we lack a similar standard for visualization design. This paper investigates the design patterns of visualizations with RTL scripts. We collected 128 visualizations from data-driven articles published in Arabic news outlets and analyzed their chart composition, textual elements, and sources. Our analysis suggests that designers tend to apply RTL approaches more frequently for categorical data. In other situations, we observed a mix of Left-to-right (LTR) and RTL approaches for chart directions and structures, sometimes inconsistently utilized within the same article.
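The "channel accuracy as linearity" idea from the v-short-1072 abstract above can be made concrete with a small sketch: given embeddings of stimuli that vary along one visual channel (e.g., bar length), test how linearly embedding distance grows with stimulus magnitude. The embeddings are assumed to be precomputed (e.g., with CLIP's image encoder); this is an illustrative reading, not the paper's exact protocol.

import numpy as np

def channel_linearity(embeddings, magnitudes):
    """Correlate embedding distance from the smallest stimulus with the
    true magnitude difference; 1.0 would indicate perfectly linear growth."""
    embeddings = np.asarray(embeddings)
    magnitudes = np.asarray(magnitudes, dtype=float)
    base = embeddings[np.argmin(magnitudes)]
    dist = np.linalg.norm(embeddings - base, axis=1)
    delta = magnitudes - magnitudes.min()
    return np.corrcoef(dist, delta)[0, 1]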
We reflect on this lack of clear guidelines for RTL data visualizations and derive implications for visualization authoring tools and future research directions.","authors":[{"affiliations":["University College London, London, United Kingdom","UAE University , Al Ain, United Arab Emirates"],"email":"muna.alebri.19@ucl.ac.uk","is_corresponding":true,"name":"Muna Alebri"},{"affiliations":["Worcester Polytechnic Institute, Worcester, United States"],"email":"ntrakotondravony@wpi.edu","is_corresponding":false,"name":"No\u00eblle Rakotondravony"},{"affiliations":["Worcester Polytechnic Institute, Worcester, United States"],"email":"ltharrison@wpi.edu","is_corresponding":false,"name":"Lane Harrison"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1078","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Design Patterns in Right-to-Left Visualizations: The Case of Arabic Content","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1079","abstract":"Image datasets serve as the foundation for machine learning models in computer vision, significantly influencing model capabilities, performance, and biases alongside architectural considerations. Therefore, understanding the composition and distribution of these datasets has become increasingly crucial. To address the need for intuitive exploration of these datasets, we propose AEye, an extensible and scalable visualization tool tailored to image datasets. AEye utilizes a contrastively trained model to embed images into semantically meaningful high-dimensional representations, facilitating data clustering and organization. To visualize the high-dimensional representations, we project them onto a two-dimensional plane and arrange images in layers so users can seamlessly navigate and explore them interactively. Furthermore, AEye facilitates semantic search functionalities for both text and image queries, enabling users to search for content. 
We open-source the codebase for AEye, and provide a simple configuration to add additional datasets.","authors":[{"affiliations":["ETH Zurich, Zurich, Switzerland"],"email":"fgroetschla@ethz.ch","is_corresponding":false,"name":"Florian Gr\u00f6tschla"},{"affiliations":["ETH Zurich, Zurich, Switzerland"],"email":"lanzendoerfer@ethz.ch","is_corresponding":false,"name":"Luca A Lanzend\u00f6rfer"},{"affiliations":["ETH Zurich, Zurich, Switzerland"],"email":"mcalzavara@student.ethz.ch","is_corresponding":false,"name":"Marco Calzavara"},{"affiliations":["ETH Zurich, Zurich, Switzerland"],"email":"wattenhofer@ethz.ch","is_corresponding":false,"name":"Roger Wattenhofer"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1079","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"AEye: A Visualization Tool for Image Datasets","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1081","abstract":"The sine illusion occurs when more quickly changing pairs of lines lead to larger underestimates of the delta between them. Through a user study, we evaluate three visual manipulations for mitigating sine illusions: dotted lines, aligned gridlines, and offset gridlines. We asked participants to compare the deltas between two lines at two time points and found aligned gridlines to be the most effective in mitigating sine illusions. Using data from the user study, we produced a model that predicts the impact of the sine illusion in line charts by accounting for the ratio of the vertical distance between the two points of comparison. When the ratio is less than 50\\%, participants begin to be influenced by the sine illusion. This effect can be significantly exacerbated when the difference between the two deltas falls under 30\\%. We compared two explanations for the sine illusion based on our data: either participants were mistakenly using the perpendicular distance between the two lines to make their comparison (the perpendicular explanation), or they incorrectly relied on the length of the line segment perpendicular to the angle bisector of the bottom and top lines (the equal triangle explanation).
We found the equal triangle explanation to be the more predictive model explaining participant behaviors.","authors":[{"affiliations":["Google LLC, San Francisco, United States"],"email":"cknit1999@gmail.com","is_corresponding":false,"name":"Clayton J Knittel"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"jawuah3@gatech.edu","is_corresponding":false,"name":"Jane Awuah"},{"affiliations":["Northwestern University, Evanston, United States"],"email":"franconeri@northwestern.edu","is_corresponding":false,"name":"Steven L Franconeri"},{"affiliations":["Georgia Tech, Atlanta, United States"],"email":"cxiong@gatech.edu","is_corresponding":true,"name":"Cindy Xiong Bearfield"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1081","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Gridlines Mitigate Sine Illusion in Line Charts","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1089","abstract":"In healthcare, AI techniques are widely used for tasks like risk assessment and anomaly detection. Despite AI's potential as a valuable assistant, its role in complex medical data analysis often oversimplifies human-AI collaboration dynamics. To address this, we collaborated with a local hospital, engaging six physicians and one data scientist in a formative study. From this collaboration, we propose a framework integrating two-phase interactive visualization systems: one for Human-Led, AI-Assisted Retrospective Analysis and another for AI-Mediated, Human-Reviewed Iterative Modeling. 
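The two thresholds reported in the sine-illusion abstract (v-short-1081) above lend themselves to a tiny predictive predicate. The function below is a hedged paraphrase of the stated thresholds, reading "difference between the two deltas falls under 30%" as the delta ratio dropping below 30%; it is not the authors' fitted model.

def sine_illusion_risk(delta_a, delta_b):
    """Classify illusion risk from the two vertical deltas being compared."""
    small, large = sorted((abs(delta_a), abs(delta_b)))
    ratio = small / large
    if ratio >= 0.5:
        return "low"        # ratio >= 50%: readers were largely unaffected
    if ratio < 0.3:
        return "high"       # illusion significantly exacerbated
    return "moderate"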
This framework aims to enhance understanding and discussion around effective human-AI collaboration in healthcare.","authors":[{"affiliations":["ShanghaiTech University, Shanghai, China","ShanghaiTech University, Shanghai, China"],"email":"ouyy@shanghaitech.edu.cn","is_corresponding":true,"name":"Yang Ouyang"},{"affiliations":["University of Illinois at Urbana-Champaign, Champaign, United States","University of Illinois at Urbana-Champaign, Champaign, United States"],"email":"zhang414@illinois.edu","is_corresponding":false,"name":"Chenyang Zhang"},{"affiliations":["ShanghaiTech University, Shanghai, China","ShanghaiTech University, Shanghai, China"],"email":"wanghe1@shanghaitech.edu.cn","is_corresponding":false,"name":"He Wang"},{"affiliations":["Zhongshan Hospital Fudan University, Shanghai, China","Zhongshan Hospital Fudan University, Shanghai, China"],"email":"15301050137@fudan.edu.cn","is_corresponding":false,"name":"Tianle Ma"},{"affiliations":["Zhongshan Hospital Fudan University, Shanghai, China","Zhongshan Hospital Fudan University, Shanghai, China"],"email":"cjiang_fdu@yeah.net","is_corresponding":false,"name":"Chang Jiang"},{"affiliations":["Zhongshan Hospital Fudan University, Shanghai, China","Zhongshan Hospital Fudan University, Shanghai, China"],"email":"522649732@qq.com","is_corresponding":false,"name":"Yuheng Yan"},{"affiliations":["Zhongshan Hospital Fudan University, Shanghai, China","Zhongshan Hospital Fudan University, Shanghai, China"],"email":"yan.zuoqin@zs-hospital.sh.cn","is_corresponding":false,"name":"Zuoqin Yan"},{"affiliations":["Hong Kong University of Science and Technology, Hong Kong, Hong Kong","Hong Kong University of Science and Technology, Hong Kong, Hong Kong"],"email":"mxj@cse.ust.hk","is_corresponding":false,"name":"Xiaojuan Ma"},{"affiliations":["Southeast University, Nanjing, China","Southeast University, Nanjing, China"],"email":"cshiag@connect.ust.hk","is_corresponding":false,"name":"Chuhan Shi"},{"affiliations":["ShanghaiTech University, Shanghai, China","ShanghaiTech University, Shanghai, China"],"email":"liquan@shanghaitech.edu.cn","is_corresponding":false,"name":"Quan Li"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1089","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"A Two-Phase Visualization System for Continuous Human-AI Collaboration in Sequelae Analysis and Modeling","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1090","abstract":"Visualizing high dimensional data is challenging, since any dimensionality reduction technique will distort distances. A classic method in cartography\u2013Tissot\u2019s Indicatrix, specific to sphere-to-plane maps\u2013visualizes distortion using ellipses. Inspired by this idea, we describe the hypertrix: a method for representing distortions that occur when data is projected from arbitrarily high dimensions onto a 2D plane. 
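A rough sketch of the indicatrix idea in the Hypertrix abstract (v-short-1090) above: for each projected point, fit a local linear map from high-dimensional neighbor offsets to their 2D offsets and read the singular values as the axes of a Tissot-like distortion ellipse. This is a naive least-squares stand-in, not the paper's construction; the neighborhood size k should comfortably exceed the data's intrinsic dimensionality.

import numpy as np

def local_indicatrix(X_high, X_2d, i, k=15):
    """Semi-axis lengths of a distortion ellipse at projected point i."""
    d = np.linalg.norm(X_high - X_high[i], axis=1)
    nbrs = np.argsort(d)[1:k + 1]              # k nearest high-D neighbors
    A_high = X_high[nbrs] - X_high[i]          # (k, D) local offsets
    A_low = X_2d[nbrs] - X_2d[i]               # (k, 2) projected offsets
    J, *_ = np.linalg.lstsq(A_high, A_low, rcond=None)   # local Jacobian
    return np.linalg.svd(J, compute_uv=False)  # equal values = low distortion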
We demonstrate our technique through synthetic and real-world datasets, and describe how this indicatrix can guide interpretations of nonlinear dimensionality reduction.","authors":[{"affiliations":["Harvard University, Boston, United States"],"email":"sraval@g.harvard.edu","is_corresponding":true,"name":"Shivam Raval"},{"affiliations":["Harvard University, Cambridge, United States","Google Research, Cambridge, United States"],"email":"viegas@google.com","is_corresponding":false,"name":"Fernanda Viegas"},{"affiliations":["Harvard University, Cambridge, United States","Google Research, Cambridge, United States"],"email":"wattenberg@gmail.com","is_corresponding":false,"name":"Martin Wattenberg"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1090","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Hypertrix: An indicatrix for high-dimensional visualizations","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1096","abstract":"Coordinated multiple views (CMV) in a visual analytics system can help users explore multiple data representations simultaneously with linked interactions. However, the implementation of coordinated multiple views can be challenging. Without standard software libraries, visualization designers need to re-implement CMV during the development of each system. We introduce use-coordination, a grammar and software library that supports the efficient implementation of CMV. The grammar defines a JSON-based representation for an abstract coordination model from the information visualization literature. We contribute an optional extension to the model and grammar that allows for hierarchical coordination. Through three use cases, we show that use-coordination enables implementation of CMV in systems containing not only basic statistical charts but also more complex visualizations such as medical imaging volumes.
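To give the JSON-based grammar mentioned in the use-coordination abstract (v-short-1096) above a concrete shape, here is a hypothetical spec written as a Python literal. The field names mirror the coordination-model vocabulary (coordination types, scopes, views) but are illustrative, not necessarily the library's exact grammar.

coordination_spec = {
    "coordinationSpace": {        # shared state, grouped by coordination type
        "zoomLevel": {"A": 3},
        "colorEncoding": {"A": "cellType"},
    },
    "viewCoordination": {         # which views subscribe to which scopes
        "scatterplot": {"coordinationScopes": {"zoomLevel": "A", "colorEncoding": "A"}},
        "heatmap": {"coordinationScopes": {"colorEncoding": "A"}},
    },
}

Views sharing a scope (here "A") update together; giving a view its own scope decouples it, which is what makes the representation reusable across systems.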
We describe six software extensions, including a graphical editor for manipulation of coordination, which showcase the potential to build upon our coordination-focused declarative approach.","authors":[{"affiliations":["Harvard Medical School, Boston, United States"],"email":"mark_keller@hms.harvard.edu","is_corresponding":true,"name":"Mark S Keller"},{"affiliations":["Harvard Medical School, Boston, United States"],"email":"trevor_manz@g.harvard.edu","is_corresponding":false,"name":"Trevor Manz"},{"affiliations":["Harvard Medical School, Boston, United States"],"email":"nils@hms.harvard.edu","is_corresponding":false,"name":"Nils Gehlenborg"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1096","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Use-Coordination: Model, Grammar, and Library for Implementation of Coordinated Multiple Views","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1097","abstract":"Visualization tools now commonly present automated insights highlighting salient data patterns, including correlations, distributions, outliers, and differences, among others. While these insights are valuable for data exploration and chart interpretation, users currently only have a binary choice of accepting or rejecting them, lacking the flexibility to refine the system logic or customize the insight generation process. To address this limitation, we present GROOT, a prototype system that allows users to proactively specify and refine automated data insights. The system allows users to directly manipulate chart elements to receive insight recommendations based on their selections. Additionally, GROOT provides users with a manual editing interface to customize, reconfigure, or add new insights to individual charts and propagate them to future explorations. 
We describe a usage scenario to illustrate how these features collectively support insight editing and configuration, and discuss opportunities for future work including incorporating LLMs, improving semantic data and visualization search, and supporting insight management.","authors":[{"affiliations":["University of Maryland, College Park, College Park, United States","Tableau Research, Seattle, United States"],"email":"sgathani@cs.umd.edu","is_corresponding":true,"name":"Sneha Gathani"},{"affiliations":["Tableau Research, Seattle, United States"],"email":"amcrisan@uwaterloo.ca","is_corresponding":false,"name":"Anamaria Crisan"},{"affiliations":["Tableau Research, Palo Alto, United States"],"email":"vsetlur@tableau.com","is_corresponding":false,"name":"Vidya Setlur"},{"affiliations":["Tableau Research, Seattle, United States"],"email":"arjun.srinivasan.10@gmail.com","is_corresponding":false,"name":"Arjun Srinivasan"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1097","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Groot: An Interface for Editing and Configuring Automated Data Insights","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1100","abstract":"Confidence scores of automatic speech recognition (ASR) outputs are often inadequately communicated, preventing their seamless integration into analytical workflows. In this paper, we introduce ConFides, a visual analytic system developed in collaboration with intelligence analysts to address this issue. ConFides aims to aid exploration and post-AI-transcription editing by visually representing the confidence associated with the transcription. We demonstrate how our tool can assist intelligence analysts who use ASR outputs in their analytical and exploratory tasks and how it can help mitigate misinterpretation of crucial information. We also discuss opportunities for improving textual data cleaning and model transparency for human-machine collaboration.","authors":[{"affiliations":["Washington University in St. Louis, St. Louis, United States"],"email":"sha@wustl.edu","is_corresponding":true,"name":"Sunwoo Ha"},{"affiliations":["Washington University in St. Louis, St. Louis, United States"],"email":"chaelim@wustl.edu","is_corresponding":false,"name":"Chaehun Lim"},{"affiliations":["Smith College, Northampton, United States"],"email":"jcrouser@smith.edu","is_corresponding":false,"name":"R. Jordan Crouser"},{"affiliations":["Washington University in St.
Louis, United States"],"email":"alvitta@wustl.edu","is_corresponding":false,"name":"Alvitta Ottley"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1100","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"ConFides: A Visual Analytics Solution for Automated Speech Recognition Analysis and Exploration","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1101","abstract":"Color coding, a technique assigning specific colors to different information types, has proven advantages in aiding human cognitive activities, especially reading and comprehension. The rise of Large Language Models (LLMs) has streamlined document coding, enabling simple automatic text labeling with various schemes. This has the potential to make color-coding more accessible and benefit more users. However, the importance of color choice, particularly in aiding textual information seeking through various color schemes, is not well studied. This paper presents a user study assessing the effectiveness of various color schemes generated by different base colors for readers' information-seeking performance in text documents color-coded by LLMs. Participants performed information-seeking tasks within scholarly papers' abstracts, each coded with a different scheme under time constraints. Results showed that non-analogous color schemes lead to better information-seeking performance, in both accuracy and response time. Yellow-inclusive color schemes lead to shorter response times and are also preferred by most participants. These could inform the better choice of color scheme for annotating text documents. 
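The core representation idea in the ConFides abstract (v-short-1100) above can be sketched in a few lines: render each transcribed word with visual prominence tied to its ASR confidence. The HTML mapping below is an illustrative stand-in for the system's actual encodings, and the (word, confidence) pairs are made up.

def transcript_to_html(words):
    """words: iterable of (token, confidence in [0, 1]) pairs."""
    spans = [
        f'<span style="opacity:{max(conf, 0.25):.2f}" title="{conf:.2f}">{tok}</span>'
        for tok, conf in words
    ]
    return " ".join(spans)

print(transcript_to_html([("meet", 0.97), ("at", 0.92), ("dawn", 0.41)]))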
As LLMs advance document coding, we advocate for more research focusing on the \"color\" aspect of color-coding techniques.","authors":[{"affiliations":["Pennsylvania State University, University Park, United States"],"email":"samnghoyin@gmail.com","is_corresponding":true,"name":"Ho Yin Ng"},{"affiliations":["Pennsylvania State University, University Park, United States"],"email":"zmh5268@psu.edu","is_corresponding":false,"name":"Zeyu He"},{"affiliations":["Pennsylvania State University, University Park , United States"],"email":"txh710@psu.edu","is_corresponding":false,"name":"Ting-Hao Kenneth Huang"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1101","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"What Color Scheme is More Effective in Assisting Readers to Locate Information in a Color-Coded Article?","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1109","abstract":"Homophily refers to the tendency of individuals to associate with others who are similar to them in characteristics, such as, race, ethnicity, age, gender, or interests. In this paper, we investigate if individuals exhibit racial homophily when viewing visualizations, using mass shooting data in the United States as the example topic. We conducted a crowdsourced experiment (N=450) where each participant was shown a visualization displaying the counts of mass shooting victims, highlighting the counts for one of three racial groups (White, Black, or Hispanic). Participants were assigned to view visualizations highlighting their own race or a different race to assess the influence of racial concordance on changes in affect (emotion) and attitude towards gun control. While we did not find evidence of homophily, the results showed a significant negative shift in affect across all visualization conditions. Notably, political ideology significantly impacted changes in affect, with more liberal views correlating with a more negative affect change. 
Our findings underscore the complexity of reactions to mass shooting visualizations and highlight the need for additional measures for understanding homophily in visualizations.","authors":[{"affiliations":["New York University, Brooklyn, United States"],"email":"pt2393@nyu.edu","is_corresponding":true,"name":"Poorna Talkad Sukumar"},{"affiliations":["New York University, Brooklyn, United States"],"email":"mporfiri@nyu.edu","is_corresponding":false,"name":"Maurizio Porfiri"},{"affiliations":["New York University, New York, United States"],"email":"onov@nyu.edu","is_corresponding":false,"name":"Oded Nov"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1109","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Connections Beyond Data: Exploring Homophily With Visualizations","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1114","abstract":"As visualization literacy and its implications gain prominence, we need effective methods to teach and prepare students for the variety of visualizations they might encounter in an increasingly data-driven world. Recently, the potential of comics has been recognized in various data visualization contexts, including educational settings. In this paper, we describe the development of a workshop in which we use our \u201ccomic construction kit\u201d as a tool for students to understand various data visualization techniques through an interactive creative approach of creating explanatory comics. We report on our insights and learnings from holding eight workshops with high school students, high school teachers, university students, and university lecturers, aiming to enhance the landscape of hands-on visualization activities that can enrich the visualization classroom. The comic construction kit and all supplemental materials are open source under a CC-BY license and available at https://fhstp.github.io/comixplain/vis4schools.html.","authors":[{"affiliations":["St. P\u00f6lten University of Applied Sciences, St. P\u00f6lten, Austria"],"email":"magdalena.boucher@fhstp.ac.at","is_corresponding":true,"name":"Magdalena Boucher"},{"affiliations":["St. Poelten University of Applied Sciences, St. Poelten, Austria"],"email":"christina.stoiber@fhstp.ac.at","is_corresponding":false,"name":"Christina Stoiber"},{"affiliations":["School of Informatics, Communications and Media, Hagenberg im M\u00fchlkreis, Austria"],"email":"mandy.keck@fh-hagenberg.at","is_corresponding":false,"name":"Mandy Keck"},{"affiliations":["St. Poelten University of Applied Sciences, St. Poelten, Austria"],"email":"victor.oliveira@fhstp.ac.at","is_corresponding":false,"name":"Victor Adriel de Jesus Oliveira"},{"affiliations":["St. Poelten University of Applied Sciences, St. 
Poelten, Austria"],"email":"wolfgang.aigner@fhstp.ac.at","is_corresponding":false,"name":"Wolfgang Aigner"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1114","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"The Comic Construction Kit: An Activity for Students to Learn and Explain Data Visualizations","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1116","abstract":"Visualizations support rapid analysis of scientific datasets, allowing viewers to glean aggregate information (e.g., the mean) within split-seconds. While prior research has explored this ability in conventional charts, it is unclear if spatial visualizations used by computational scientists afford a similar ensemble perception capacity. We investigate people's ability to estimate two summary statistics, mean and variance, from pseudocolor scalar fields. In a crowdsourced experiment, we find that participants can reliably characterize both statistics, although variance discrimination requires a much stronger signal. Multi-hue and diverging colormaps outperformed monochromatic, luminance ramps in aiding this extraction. Analysis of qualitative responses suggests that participants often estimate the distribution of hotspots and valleys as visual proxies for data statistics. These findings suggest that people's summary interpretation of spatial datasets is likely driven by the appearance of discrete color segments, rather than assessments of overall luminance. Implicit color segmentation in quantitative displays could thus prove more useful than previously assumed by facilitating quick, gist-level judgments about color-coded visualizations.","authors":[{"affiliations":["Argonne National Laboratory, Lemont, United States"],"email":"vmateevitsi@anl.gov","is_corresponding":false,"name":"Victor A. Mateevitsi"},{"affiliations":["Argonne National Laboratory, Lemont, United States","University of Illinois Chicago, Chicago, United States"],"email":"papka@anl.gov","is_corresponding":false,"name":"Michael E. 
Papka"},{"affiliations":["Indiana University, Indianapolis, United States"],"email":"redak@iu.edu","is_corresponding":true,"name":"Khairi Reda"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1116","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Science in a Blink: Supporting Ensemble Perception in Scalar Fields","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1117","abstract":"Geovisualizations are powerful tools for exploratory spatial analysis, enabling sighted users to discern patterns, trends, and relationships within geographic data. However, these visual tools have remained largely inaccessible to screen-reader users. We present AltGeoViz, a new system we designed to facilitate geovisualization exploration for these users. AltGeoViz dynamically generates alt-text descriptions based on the user's current map view, providing summaries of spatial patterns and descriptive statistics. In a study of five screen-reader users, we found that AltGeoViz enabled them to interact with geovisualizations in previously infeasible ways. Participants demonstrated a clear understanding of data summaries and their location context, and they could synthesize spatial understandings of their explorations. Moreover, we identified key areas for improvement, such as the addition of intuitive spatial navigation controls and comparative analysis features.","authors":[{"affiliations":["University of Washington, Seattle, United States"],"email":"chuchuli@cs.washington.edu","is_corresponding":true,"name":"Chu Li"},{"affiliations":["University of Washington, Seattle, United States"],"email":"ypang2@cs.washington.edu","is_corresponding":false,"name":"Rock Yuren Pang"},{"affiliations":["University of Washington, Seattle, United States"],"email":"asharif@cs.washington.edu","is_corresponding":false,"name":"Ather Sharif"},{"affiliations":["University of Washington, Seattle, United States"],"email":"chheda@cs.washington.edu","is_corresponding":false,"name":"Arnavi Chheda-Kothary"},{"affiliations":["University of Washington, Seattle, United States"],"email":"jheer@uw.edu","is_corresponding":false,"name":"Jeffrey Heer"},{"affiliations":["University of Washington, Seattle, United States"],"email":"jonf@cs.uw.edu","is_corresponding":false,"name":"Jon E. 
Froehlich"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1117","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"AltGeoViz: Facilitating Accessible Geovisualization","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1119","abstract":"Analyzing uncertainty in spatial data is a vital task in many domains, as for example with climate and weather simulation ensembles. Although there are many methods to support the analysis of the uncertainty, such as uncertain isocontours or calculation of statistical values, it is still a challenge to get an overview of the uncertainty and then decide a further method or parameter to analyze the data, or investigate further some region or point of interest. We present cumulative height fields, a visualization method for 2D scalar field ensembles using the marginal empirical distribution function and show preliminary results using volume rendering and slicing for the Max Planck Institute Grand Ensemble.","authors":[{"affiliations":["Institute of Computer Science, Leipzig University, Leipzig, Germany"],"email":"daetz@informatik.uni-leipzig.de","is_corresponding":true,"name":"Tomas Rodolfo Daetz Chacon"},{"affiliations":["German Climate Computing Center (DKRZ), Hamburg, Germany"],"email":"boettinger@dkrz.de","is_corresponding":false,"name":"Michael B\u00f6ttinger"},{"affiliations":["Leipzig University, Leipzig, Germany"],"email":"scheuermann@informatik.uni-leipzig.de","is_corresponding":false,"name":"Gerik Scheuermann"},{"affiliations":["Leipzig University, Leipzig, Germany"],"email":"heine@informatik.uni-leipzig.de","is_corresponding":false,"name":"Christian Heine"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1119","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Visualization of 2D Scalar Field Ensembles Using Volume Visualization of the Empirical Distribution Function","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1121","abstract":"Many real-world networks contain structurally-equivalent nodes. These are defined as vertices that share the same set of neighboring nodes, making them interchangeable with a traditional graph layout approach. 
However, many real-world graphs also have properties associated with nodes, adding additional meaning to them. We present an approach for swapping locations of structurally-equivalent nodes in graph layout so that those with more similar properties have closer proximity to each other. This improves the usefulness of the visualization from an attribute perspective without negatively impacting the visualization from a structural perspective. We include an algorithm for finding these sets of nodes in linear time, as well as methodologies for ordering nodes based on their attribute similarity, which works for scalar, ordinal, multidimensional, and categorical data.","authors":[{"affiliations":["Pacific Northwest National Lab, Richland, United States"],"email":"patrick.mackey@pnnl.gov","is_corresponding":true,"name":"Patrick Mackey"},{"affiliations":["University of Arizona, Tucson, United States","Pacific Northwest National Laboratory, Richland, United States"],"email":"jacobmiller1@arizona.edu","is_corresponding":false,"name":"Jacob Miller"},{"affiliations":["Pacific Northwest National Laboratory, Richland, United States"],"email":"liz.f@pnnl.gov","is_corresponding":false,"name":"Liz Faultersack"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1121","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Improving Property Graph Layouts by Leveraging Attribute Similarity for Structurally Equivalent Nodes","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1126","abstract":"Psychological research often involves understanding psychological constructs through conducting factor analysis on data collected by a questionnaire, which can comprise hundreds of questions. Without interactive systems for interpreting factor models, researchers are frequently exposed to subjectivity, potentially leading to misinterpretations or overlooked crucial information. This paper introduces FAVis, a novel interactive visualization tool designed to aid researchers in interpreting and evaluating factor analysis results. FAVis enhances the understanding of relationships between variables and factors by supporting multiple views for visualizing factor loadings and correlations, allowing users to analyze information from various perspectives. The primary feature of FAVis is to enable users to set optimal thresholds for factor loadings to balance clarity and information retention. FAVis also allows users to assign tags to variables, enhancing the understanding of factors by linking them to their associated psychological constructs. We conduct a case study on a dataset from the Motivational State Questionnaire, utilizing a three-factor common factor model. 
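The linear-time detection step described in the v-short-1121 abstract above follows directly from the definition: nodes are structurally equivalent when they share a neighbor set, so hashing each frozen neighbor set buckets them in one pass over an adjacency dict. A minimal sketch with a toy graph:

from collections import defaultdict

def structurally_equivalent_groups(adj):
    """adj: dict mapping each node to its set of neighbors."""
    groups = defaultdict(list)
    for node, neighbors in adj.items():
        groups[frozenset(neighbors)].append(node)
    return [g for g in groups.values() if len(g) > 1]

adj = {"a": {"x", "y"}, "b": {"x", "y"}, "x": {"a", "b"}, "y": {"a", "b"}}
print(structurally_equivalent_groups(adj))   # [['a', 'b'], ['x', 'y']]

Within each group, the paper's contribution is the ordering: nodes are permuted so that attribute-similar ones land adjacent, which the hashing step above does not address.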
Our user study demonstrates the utility of FAVis in various tasks.","authors":[{"affiliations":["University of Notre Dame, Notre Dame, United States","University of Notre Dame, Notre Dame, United States"],"email":"ylu22@nd.edu","is_corresponding":true,"name":"Yikai Lu"},{"affiliations":["University of Notre Dame, Notre Dame, United States"],"email":"chaoli.wang@nd.edu","is_corresponding":false,"name":"Chaoli Wang"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1126","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"FAVis: Visual Analytics of Factor Analysis for Psychological Research","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1127","abstract":"In this paper, we analyze the Apple Vision Pro hardware and the visionOS software platform, assessing their capabilities for volume rendering of structured grids, a prevalent technique across various applications. The Apple Vision Pro supports multiple display modes, from classical augmented reality (AR) using video see-through technology to immersive virtual reality (VR) environments that exclusively render virtual objects. These modes utilize different APIs and exhibit distinct capabilities. Our focus is on direct volume rendering, selected for its implementation challenges due to the native graphics APIs being predominantly oriented towards surface shading. Volume rendering is particularly vital in fields where AR and VR visualizations offer substantial benefits, such as in medicine and manufacturing. Despite its initial high cost, we anticipate that the Vision Pro will become more accessible and affordable over time, following Apple's track record of market expansion. 
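FAVis's central interaction in the v-short-1126 abstract above, thresholding factor loadings to balance clarity against information retention, can be sketched as a filter over a loadings matrix; the function name and the 0.4 cutoff are illustrative, not the tool's defaults.

import numpy as np

def visible_links(loadings, variables, threshold=0.4):
    """Keep only variable-factor links whose |loading| clears the cutoff."""
    rows, cols = np.nonzero(np.abs(loadings) >= threshold)
    return [(variables[r], f"factor{c + 1}", float(loadings[r, c]))
            for r, c in zip(rows, cols)]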
As these devices become more prevalent, understanding how to effectively program and utilize them becomes increasingly important, offering significant opportunities for innovation and practical applications in various sectors.","authors":[{"affiliations":["University of Duisburg-Essen, Duisburg, Germany"],"email":"camilla.hrycak@uni-due.de","is_corresponding":true,"name":"Camilla Hrycak"},{"affiliations":["University of Duisburg-Essen, Duisburg, Germany"],"email":"david.lewakis@stud.uni-due.de","is_corresponding":false,"name":"David Lewakis"},{"affiliations":["University of Duisburg-Essen, Duisburg, Germany"],"email":"jens.krueger@uni-due.de","is_corresponding":false,"name":"Jens Harald Krueger"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1127","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Investigating the Apple Vision Pro Spatial Computing Platform for GPU-Based Volume Visualization","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1130","abstract":"Visualization, from simple line plots to complex high-dimensional visual analysis systems, has established itself throughout numerous domains to explore, analyze, and evaluate data. Applying such visualizations in the context of simulation science where High-Performance Computing (HPC) produces ever-growing amounts of data that is more complex, potentially multidimensional, and multi-modal, takes up resources and a high level of technological experience often not available to domain experts. In this work, we present DaVE - a curated database of visualization examples, which aims to provide state-of-the-art and advanced visualization methods that arise in the context of HPC applications. Based on domain- or data-specific descriptors entered by the user, DaVE provides a list of appropriate visualization techniques, each accompanied by descriptions, examples, references, and resources. Sample code, adaptable container templates, and recipes for easy integration in HPC applications can be downloaded for easy access to high-fidelity visualizations. 
While the database is currently filled with a limited number of entries based on a broad evaluation of needs and challenges of current HPC users, DaVE is designed to be easily extended by experts from both the visualization and HPC communities.","authors":[{"affiliations":["RWTH Aachen University, Aachen, Germany"],"email":"koenen@informatik.rwth-aachen.de","is_corresponding":true,"name":"Jens Koenen"},{"affiliations":["RPTU Kaiserslautern-Landau, Kaiserslautern, Germany"],"email":"m.petersen@rptu.de","is_corresponding":false,"name":"Marvin Petersen"},{"affiliations":["RPTU Kaiserslautern-Landau, Kaiserslautern, Germany"],"email":"garth@rptu.de","is_corresponding":false,"name":"Christoph Garth"},{"affiliations":["RWTH Aachen University, Aachen, Germany"],"email":"gerrits@vis.rwth-aachen.de","is_corresponding":false,"name":"Tim Gerrits"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1130","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"DaVE - A Curated Database of Visualization Examples","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1135","abstract":"Humans struggle to perceive and interpret high-dimensional data. Therefore, high-dimensional data are often projected into two dimensions for visualization. Many applications benefit from complex nonlinear dimensionality reduction techniques, but the effects of individual high-dimensional features are hard to explain in the two-dimensional space. Most visualization solutions use multiple two-dimensional plots, each showing the effect of one high-dimensional feature in two dimensions; this approach creates a need for a visual inspection of k plots for a k-dimensional input space. Our solution, Feature Clock, provides a novel approach that eliminates the need to inspect these k plots to grasp the influence of original features on the data structure depicted in two dimensions. 
Feature Clock enhances the explainability and compactness of visualizations of embedded data and is available in an open-source Python library.","authors":[{"affiliations":["ETH Z\u00fcrich, Z\u00fcrich, Switzerland"],"email":"ovcharenko.folga@gmail.com","is_corresponding":true,"name":"Olga Ovcharenko"},{"affiliations":["ETH Z\u00fcrich, Z\u00fcrich, Switzerland"],"email":"rita.sevastjanova@uni-konstanz.de","is_corresponding":false,"name":"Rita Sevastjanova"},{"affiliations":["ETH Zurich, Z\u00fcrich, Switzerland"],"email":"valentina.boeva@inf.ethz.ch","is_corresponding":false,"name":"Valentina Boeva"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1135","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Feature Clock: High-Dimensional Effects in Two-Dimensional Plots","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1144","abstract":"Reconstruction of 3D scenes from 2D images is a technical challenge that impacts domains from Earth and planetary sciences and space exploration to augmented and virtual reality. Typically, reconstruction algorithms first identify common features across images and then minimize reconstruction errors after estimating the shape of the terrain. This bundle adjustment (BA) step optimizes around a single, simplifying scalar value that obfuscates many possible causes of reconstruction errors (e.g., initial estimate of the position and orientation of the camera, lighting conditions, ease of feature detection in the terrain). Reconstruction errors can lead to inaccurate scientific inferences or endanger a spacecraft exploring a remote environment. To address this challenge, we present VECTOR, a visual analysis tool that improves error inspection for stereo reconstruction BA. VECTOR provides analysts with previously unavailable visibility into feature locations, camera pose, and computed 3D points. VECTOR was developed in partnership with the Perseverance Mars Rover and Ingenuity Mars Helicopter terrain reconstruction team at the NASA Jet Propulsion Laboratory. 
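A naive stand-in for the Feature Clock idea in the v-short-1135 abstract above: summarize each high-dimensional feature's influence on a 2D embedding as a single direction, here the least-squares gradient of the feature over the embedded coordinates. A sketch under that simplifying assumption, not the library's actual algorithm.

import numpy as np

def feature_directions(X_high, X_2d):
    """One (dx, dy) arrow per original feature, drawable as a 'clock'."""
    # Column of ones lets lstsq fit an intercept alongside the 2D gradient
    A = np.hstack([X_2d, np.ones((len(X_2d), 1))])
    coefs, *_ = np.linalg.lstsq(A, X_high, rcond=None)
    return coefs[:2].T            # shape (n_features, 2)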
We report on how this tool was used to debug and improve terrain reconstruction for the Mars 2020 mission.","authors":[{"affiliations":["Northeastern University, Boston, United States"],"email":"racquel.fygenson@gmail.com","is_corresponding":false,"name":"Racquel Fygenson"},{"affiliations":["Weta FX, Auckland, New Zealand"],"email":"kjawad@andrew.cmu.edu","is_corresponding":false,"name":"Kazi Jawad"},{"affiliations":["Art Center, Pasadena, United States"],"email":"zongzhanisabelli@gmail.com","is_corresponding":false,"name":"Zongzhan Li"},{"affiliations":["California Institute of Technology, Pasadena, United States"],"email":"francois.ayoub@jpl.nasa.gov","is_corresponding":false,"name":"Francois Ayoub"},{"affiliations":["California Institute of Technology, Pasadena, United States"],"email":"bob.deen@jpl.nasa.gov","is_corresponding":false,"name":"Robert G Deen"},{"affiliations":["California Institute of Technology, Pasadena, United States"],"email":"sd@scottdavidoff.com","is_corresponding":false,"name":"Scott Davidoff"},{"affiliations":["Carnegie Mellon University, Pittsburgh, United States"],"email":"domoritz@cmu.edu","is_corresponding":false,"name":"Dominik Moritz"},{"affiliations":["NASA-JPL, Pasadena, United States"],"email":"mauricio.a.hess.flores@jpl.nasa.gov","is_corresponding":true,"name":"Mauricio Hess-Flores"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1144","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Opening the black box of 3D reconstruction error analysis with VECTOR","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1146","abstract":"Millions of runners rely on smart watches that display running-related metrics such as pace, heart rate and distance for training and racing -- mostly with text and numbers. Although research tells us that visualizations are a good alternative to text on smart watches, we know little about how visualizations can help in realistic running scenarios. We conducted a study in which 20 runners completed running-related tasks on an outdoor track using both text and visualizations. 
Our results show that runners are 1.5 to 8 times faster in completing those tasks with visualizations than with text, prefer visualizations to text, and would use such visualizations while running -- were they available on their smart watch.","authors":[{"affiliations":["University of Victoria, Victoria, Canada"],"email":"sarinaksj@uvic.ca","is_corresponding":false,"name":"Sarina Kashanj"},{"affiliations":["University of Victoria, Victoria, Canada","Delft University of Technology, Delft, Netherlands"],"email":"xiyao.wang23@gmail.com","is_corresponding":false,"name":"Xiyao Wang"},{"affiliations":["University of Victoria, Victoria, Canada"],"email":"cperin@uvic.ca","is_corresponding":true,"name":"Charles Perin"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1146","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Visualizations on Smart Watches while Running: It Actually Helps!","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1150","abstract":"Exploratory visual data analysis tools empower data analysts to efficiently and intuitively explore data insights throughout the entire analysis cycle. However, the gap between common programmatic analysis (e.g., within computational notebooks) and exploratory visual analysis leads to a disjointed and inefficient data analysis experience. To bridge this gap, we developed PyGWalker, a Python library that offers on-the-fly assistance for exploratory visual data analysis. It features a lightweight and intuitive GUI with a shelf builder modality. Its loosely coupled architecture supports multiple computational environments to accommodate varying data sizes. Since its release in February 2023, PyGWalker has gained much attention, with 468k downloads on PyPI and over 9.8k stars on GitHub as of April 2024. 
This demonstrates its value to the data science and visualization community, with researchers and developers integrating it into their own applications and studies.","authors":[{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China","Kanaries Data Inc., Hangzhou, China"],"email":"yue.yu@connect.ust.hk","is_corresponding":true,"name":"Yue Yu"},{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"lshenaj@connect.ust.hk","is_corresponding":false,"name":"Leixian Shen"},{"affiliations":["Kanaries Data Inc., Hangzhou, China"],"email":"feilong@kanaries.net","is_corresponding":false,"name":"Fei Long"},{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"huamin@cse.ust.hk","is_corresponding":false,"name":"Huamin Qu"},{"affiliations":["Kanaries Data Inc., Hangzhou, China"],"email":"haochen@kanaries.net","is_corresponding":false,"name":"Hao Chen"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1150","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"PyGWalker: On-the-fly Assistant for Exploratory Visual Data Analysis","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1155","abstract":"Augmented reality (AR) area labels can highlight real-life objects, visualize real world regions with arbitrary boundaries, and show invisible objects or features. Environment conditions such as lighting and clutter can decrease fixed or passive label visibility, and labels that have high opacity levels can occlude crucial details in the environment. We design and evaluate active AR area label visualization modes to enhance visibility across real-life environments, while still retaining environment details within the label. For this, we define a distant characteristic color from the environment in perceptual CIELAB space, then introduce spatial variations among label pixel colors based on the underlying environment variation. 
In a user study with 18 participants, we discovered that our active label visualization modes can be comparable in visibility to a fixed green baseline by Gabbard et al., and can outperform it with added spatial variation in cluttered environments, across varying levels of lighting (e.g., nighttime), and in environments with colors similar to the fixed baseline color.","authors":[{"affiliations":["Brown University, Providence, United States"],"email":"hojung_kwon@brown.edu","is_corresponding":false,"name":"Hojung Kwon"},{"affiliations":["Brown University, Providence, United States"],"email":"yuanbo_li@brown.edu","is_corresponding":false,"name":"Yuanbo Li"},{"affiliations":["Brown University, Providence, United States"],"email":"chloe_ye2019@hotmail.com","is_corresponding":false,"name":"Xiaohan Ye"},{"affiliations":["Brown University, Providence, United States"],"email":"praccho_muna-mcquay@brown.edu","is_corresponding":false,"name":"Praccho Muna-McQuay"},{"affiliations":["Duke University, Durham, United States"],"email":"liuren.yin@duke.edu","is_corresponding":false,"name":"Liuren Yin"},{"affiliations":["Brown University, Providence, United States"],"email":"james_tompkin@brown.edu","is_corresponding":true,"name":"James Tompkin"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1155","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Active Appearance and Spatial Variation Can Improve Visibility in Area Labels for Augmented Reality","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1156","abstract":"Compound graphs are networks in which vertices can be grouped into larger subsets, with these subsets capable of further grouping, resulting in a nesting that can be many levels deep. Such graphs arise in several applications including biological workflows, chemical equations, and computational data flow analysis. Common layouts prioritize the lowest level of the grouping, down to the individual ungrouped vertices, which can make the higher level grouped structures more difficult to discern, especially in deeply nested networks. We contribute an overview+detail layout that preserves the saliency of the higher level network structure when groups are expanded to show internal nested structure. Our layout draws inner structures adjacent to their parents, using a modified tree layout to place substructures. We describe our algorithm and then present case studies demonstrating the layout's utility to a domain expert working on data flow analysis. 
Finally, we discuss network parameters and analysis situations in which our layout is well suited.","authors":[{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"hatch.on27@gmail.com","is_corresponding":true,"name":"Chang Han"},{"affiliations":["University of Arizona, Tucson, United States"],"email":"lieffers@arizona.edu","is_corresponding":false,"name":"Justin Lieffers"},{"affiliations":["University of Arizona, Tucson, United States"],"email":"claytonm@arizona.edu","is_corresponding":false,"name":"Clayton Morrison"},{"affiliations":["The University of Utah, Salt Lake City, United States"],"email":"kisaacs@sci.utah.edu","is_corresponding":false,"name":"Katherine E. Isaacs"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1156","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"An Overview + Detail Layout for Visualizing Compound Graphs","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1159","abstract":"With two studies, we assess how different walking trajectories (straight line, circular, and infinity) and speeds (2 km/h, 4 km/h, and 6 km/h) influence the accuracy and response time of participants reading micro visualizations on a smartwatch. We showed our participants common watch face micro visualizations including date, time, weather information, and four complications showing progress charts of fitness data. 
Our findings suggest that while walking trajectories did not significantly affect reading performance, overall walking activity, especially at high speeds, hurt reading accuracy and, to some extent, response time.","authors":[{"affiliations":["University of Stuttgart, Stuttgart, Germany"],"email":"fairouz.grioui@vis.uni-stuttgart.de","is_corresponding":true,"name":"Fairouz Grioui"},{"affiliations":["University of Stuttgart, Stuttgart, Germany"],"email":"research@blascheck.eu","is_corresponding":false,"name":"Tanja Blascheck"},{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Orsay, France","Inria, Saclay, France"],"email":"yaolijie0219@gmail.com","is_corresponding":false,"name":"Lijie Yao"},{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Orsay, France","Inria, Saclay, France"],"email":"petra.isenberg@inria.fr","is_corresponding":false,"name":"Petra Isenberg"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1159","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Micro Visualizations on a Smartwatch: Assessing Reading Performance While Walking","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1161","abstract":"Digital twins are an excellent tool to model, visualize, and simulate complex systems, to understand and optimize their operation. In this work, we present the technical challenges of real-time visualization of a digital twin of the Frontier supercomputer. We show the initial prototype and current state of the twin and highlight technical design challenges of visualizing such a large High Performance Computing (HPC) system. The goal is to understand the use of augmented reality as a primary way to extract information and collaborate on digital twins of complex systems. This leverages the spatio-temporal aspect of a 3D representation of a digital twin, with the ability to view historical and real-time telemetry, triggering simulations of a system state and viewing the results, which can be augmented via dashboards for details. 
Finally, we discuss considerations and opportunities for augmented reality of digital twins of large-scale, parallel computers.","authors":[{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"maiterthm@ornl.gov","is_corresponding":true,"name":"Matthias Maiterth"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"brewerwh@ornl.gov","is_corresponding":false,"name":"Wes Brewer"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"dewetd@ornl.gov","is_corresponding":false,"name":"Dane De Wet"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"greenwoodms@ornl.gov","is_corresponding":false,"name":"Scott Greenwood"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"kumarv@ornl.gov","is_corresponding":false,"name":"Vineet Kumar"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"hinesjr@ornl.gov","is_corresponding":false,"name":"Jesse Hines"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"bouknightsl@ornl.gov","is_corresponding":false,"name":"Sedrick L Bouknight"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"wangz@ornl.gov","is_corresponding":false,"name":"Zhe Wang"},{"affiliations":["Hewlett Packard Enterprise, Berkshire, United Kingdom"],"email":"tim.dykes@hpe.com","is_corresponding":false,"name":"Tim Dykes"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"fwang2@ornl.gov","is_corresponding":false,"name":"Feiyi Wang"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1161","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Visualizing an Exascale Data Center Digital Twin: Considerations, Challenges and Opportunities","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1163","abstract":"Integral curves have been widely used to represent and analyze various vector fields. Curve-based clustering and pattern search approaches are usually applied to aid the identification of meaningful patterns from large numbers of integral curves. However, they do not support an interactive, level-of-detail exploration of these patterns. To address this, we propose a Curve Segment Neighborhood Graph (CSNG) to capture the relationships between neighboring curve segments. This graph representation enables us to adapt the fast community detection algorithm, i.e., the Louvain algorithm, to identify individual graph communities from CSNG. Our results show that these communities often correspond to the features of the flow. To achieve a multi-level interactive exploration of the detected communities, we adapt a force-directed layout that allows users to refine and re-group communities based on their domain knowledge. 
We incorporate the proposed techniques into an interactive system to enable effective analysis and interpretation of complex patterns in large-scale integral curve datasets.","authors":[{"affiliations":["University of Houston, Houston, United States"],"email":"nguyenpkk95@gmail.com","is_corresponding":true,"name":"Nguyen K Phan"},{"affiliations":["University of Houston, Houston, United States"],"email":"chengu@cs.uh.edu","is_corresponding":false,"name":"Guoning Chen"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1163","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Curve Segment Neighborhood-based Vector Field Exploration","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1166","abstract":"Custom animated visualizations of large, complex datasets are helpful across many domains, but they are hard to develop. Much of the difficulty arises from maintaining visualization state across a large set of animated graphical elements that may change in number over time. We contribute Counterpoint, a framework for state management designed to help implement such visualizations in JavaScript. Using Counterpoint, developers can manipulate large collections of marks with reactive attributes that are easy to render in scalable APIs such as Canvas and WebGL. Counterpoint also helps orchestrate the entry and exit of graphical elements using the concept of a rendering \"stage.\" Through a performance evaluation, we show that Counterpoint adds minimal overhead over current high-performance rendering techniques while simplifying implementation. We also provide two examples of visualizations created using Counterpoint that illustrate its flexibility and compatibility with other visualization toolkits as well as considerations for users with disabilities. 
Counterpoint is open-source and available at https://github.com/cmudig/counterpoint.","authors":[{"affiliations":["Carnegie Mellon University, Pittsburgh, United States"],"email":"vsivaram@andrew.cmu.edu","is_corresponding":true,"name":"Venkatesh Sivaraman"},{"affiliations":["Carnegie Mellon University, Pittsburgh, United States"],"email":"fje@cmu.edu","is_corresponding":false,"name":"Frank Elavsky"},{"affiliations":["Carnegie Mellon University, Pittsburgh, United States"],"email":"domoritz@cmu.edu","is_corresponding":false,"name":"Dominik Moritz"},{"affiliations":["Carnegie Mellon University, Pittsburgh, United States"],"email":"adamperer@cmu.edu","is_corresponding":false,"name":"Adam Perer"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1166","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Counterpoint: Orchestrating Large-Scale Custom Animated Visualizations","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1173","abstract":"Visualizing citation relations with network structures is widely used, but the visual complexity can make it challenging for individual researchers to navigate through them. We collected data from 18 researchers using an interface that we designed using network simplification methods and analyzed how users browsed and identified important papers. Our analysis reveals six major patterns used for identifying papers of interest, which can be categorized into three key components: Fields, Bridges, and Foundations, each viewed from two distinct perspectives: layout-oriented and connection-oriented. The connection-oriented approach was found to be more effective for selecting relevant papers, but the layout-oriented method was adopted more often, even though it led to unexpected results and user frustration. Our findings emphasize the importance of integrating these components and the necessity to balance visual layouts with meaningful connections to enhance the effectiveness of citation networks in academic browsing systems.","authors":[{"affiliations":["Seoul National University, Seoul, Korea, Republic of"],"email":"krchoe@hcil.snu.ac.kr","is_corresponding":true,"name":"Kiroong Choe"},{"affiliations":["Seoul National University, Seoul, Korea, Republic of"],"email":"gracekim027@snu.ac.kr","is_corresponding":false,"name":"Eunhye Kim"},{"affiliations":["Dept. 
of Electrical and Computer Engineering, SNU, Seoul, Korea, Republic of"],"email":"paulmoguri@snu.ac.kr","is_corresponding":false,"name":"Sangwon Park"},{"affiliations":["Seoul National University, Seoul, Korea, Republic of"],"email":"jseo@snu.ac.kr","is_corresponding":false,"name":"Jinwook Seo"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1173","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Fields, Bridges, and Foundations: How Researchers Browse Citation Network Visualizations","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1177","abstract":"The proliferation of misleading visualizations online, particularly during critical events like public health crises and elections, poses a significant risk of misinformation. This work investigates the capability of GPT-4V to detect misleading visualizations. Utilizing a dataset of tweet-visualization pairs with various visual misleaders, we tested GPT-4V under four experimental conditions: naive zero-shot, naive few-shot, guided zero-shot, and guided few-shot. Our results demonstrate that GPT-4V can detect misleading visualizations with moderate accuracy without prior training (naive zero-shot) and that performance considerably improves by providing the model with the definitions of misleaders (guided zero-shot). However, combining definitions with examples of misleaders (guided few-shot) did not yield further improvements. 
This study underscores the feasibility of using large vision-language models such as GPT-4V to combat misinformation and emphasizes the importance of optimizing prompt engineering to enhance detection accuracy.","authors":[{"affiliations":["University of Massachusetts Amherst, Amherst, United States"],"email":"jhalexander@umass.edu","is_corresponding":false,"name":"Jason Huang Alexander"},{"affiliations":["University of Massachusetts Amherst, Amherst, United States"],"email":"phnanda@umass.edu","is_corresponding":false,"name":"Priyal H Nanda"},{"affiliations":["Northeastern University, Boston, United States"],"email":"yangkc@iu.edu","is_corresponding":false,"name":"Kai-Cheng Yang"},{"affiliations":["University of Massachusetts Amherst, Amherst, United States"],"email":"asarv@cs.umass.edu","is_corresponding":true,"name":"Ali Sarvghad"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1177","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Can GPT-4V Detect Misleading Visualizations?","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1183","abstract":"An atmospheric front is an imaginary surface that separates two distinct air masses and is commonly defined as the warm-air side of a frontal zone with high gradients of atmospheric temperature and humidity. These fronts are a widely used conceptual model in meteorology, which are often encountered in the literature as two-dimensional (2D) front lines on surface analysis charts. This paper presents a method for computing three-dimensional (3D) atmospheric fronts as surfaces that is capable of extracting continuous and well-confined features suitable for 3D visual analysis, spatio-temporal tracking, and statistical analyses. Recently developed contour-based methods for 3D front extraction rely on computing the third derivative of a moist potential temperature field. Additionally, they require the field to be smoothed to obtain continuous large-scale structures. This paper demonstrates the feasibility of an alternative method to front extraction using ridge surface computation. The proposed method requires only the second derivative of the input field and produces accurate structures even from unsmoothed data. 
An application of the ridge-based method to a data set corresponding to Cyclone Friederike demonstrates its benefits and utility towards visual analysis of the full 3D structure of fronts.","authors":[{"affiliations":["Zuse Institute Berlin, Berlin, Germany"],"email":"anne.gossing@fu-berlin.de","is_corresponding":true,"name":"Anne Gossing"},{"affiliations":["Universit\u00e4t Hamburg, Hamburg, Germany"],"email":"andreas.beckert@uni-hamburg.de","is_corresponding":false,"name":"Andreas Beckert"},{"affiliations":["Universit\u00e4t Hamburg, Hamburg, Germany"],"email":"christoph.fischer-1@uni-hamburg.de","is_corresponding":false,"name":"Christoph Fischer"},{"affiliations":["Zuse Institute Berlin, Berlin, Germany"],"email":"klenert@zib.de","is_corresponding":false,"name":"Nicolas Klenert"},{"affiliations":["Indian Institute of Science, Bangalore, India"],"email":"vijayn@iisc.ac.in","is_corresponding":false,"name":"Vijay Natarajan"},{"affiliations":["Freie Universit\u00e4t Berlin, Berlin, Germany"],"email":"george.pacey@fu-berlin.de","is_corresponding":false,"name":"George Pacey"},{"affiliations":["Universit\u00e4t Hamburg, Hamburg, Germany"],"email":"thorwin.vogt@uni-hamburg.de","is_corresponding":false,"name":"Thorwin Vogt"},{"affiliations":["Universit\u00e4t Hamburg, Hamburg, Germany"],"email":"marc.rautenhaus@uni-hamburg.de","is_corresponding":false,"name":"Marc Rautenhaus"},{"affiliations":["Zuse Institute Berlin, Berlin, Germany"],"email":"baum@zib.de","is_corresponding":false,"name":"Daniel Baum"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1183","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"A Ridge-based Approach for Extraction and Visualization of 3D Atmospheric Fronts","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1184","abstract":"To improve the perception of hierarchical structures in data sets, several color map generation algorithms have been proposed to take this structure into account. But the design of hierarchical color maps elicits different requirements to those of color maps for tabular data. Within this paper, we make an initial effort to put design rules from the color map literature into the context of hierarchical color maps. We investigate the impact of several design decisions and provide recommendations for various analysis scenarios. 
Thus, we lay the foundation for objective quality criteria to evaluate hierarchical color maps.","authors":[{"affiliations":["Fraunhofer IGD, Darmstadt, Germany"],"email":"tobias.mertz@igd.fraunhofer.de","is_corresponding":true,"name":"Tobias Mertz"},{"affiliations":["Fraunhofer IGD, Darmstadt, Germany","TU Darmstadt, Darmstadt, Germany"],"email":"joern.kohlhammer@igd.fraunhofer.de","is_corresponding":false,"name":"J\u00f6rn Kohlhammer"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1184","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Towards a Quality Approach to Hierarchical Color Maps","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1185","abstract":"The visualization and interactive exploration of geo-referenced networks poses challenges if the network's nodes are not evenly distributed. Our approach proposes new ways of realizing animated transitions for exploring such networks from an ego-perspective. We aim to reduce the required screen real estate while maintaining the viewers' mental map of distances and directions. A preliminary study provides first insights into the comprehensibility of animated geographic transitions regarding directional relationships between start and end point in different projections. 
Two use cases showcase how ego-perspective graph exploration can be supported using less screen space than previous approaches.","authors":[{"affiliations":["University of Stuttgart, Stuttgart, Germany"],"email":"max@mumintroll.org","is_corresponding":true,"name":"Max Franke"},{"affiliations":["University of Stuttgart, Stuttgart, Germany"],"email":"samuel.beck@vis.uni-stuttgart.de","is_corresponding":false,"name":"Samuel Beck"},{"affiliations":["University of Stuttgart, Stuttgart, Germany"],"email":"steffen.koch@vis.uni-stuttgart.de","is_corresponding":false,"name":"Steffen Koch"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1185","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Two-point Equidistant Projection and Degree-of-interest Filtering for Smooth Exploration of Geo-referenced Networks","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1186","abstract":"Data visualizations help extract insights from datasets, but reaching these insights requires decomposing high level goals into low-level analytic tasks that can be complex due to varying degrees of data literacy and visualization experience. Recent advancements in large language models (LLMs) have shown promise for lowering barriers for users to achieve tasks such as writing code and may likewise facilitate visualization insight. Scalable Vector Graphics (SVG), a text-based image format common in data visualizations, matches well with the text sequence processing of transformer-based LLMs. In this paper, we explore the capability of LLMs to perform 10 low-level visual analytic tasks defined by Amar, Eagan, and Stasko directly on SVG-based visualizations. Using zero-shot prompts, we instruct the models to provide responses or modify the SVG code based on given visualizations. Our findings demonstrate that LLMs can effectively modify existing SVG visualizations for some tasks like Cluster but perform poorly on tasks requiring mathematical operations like Compute Derived Value. We also discovered that LLM performance can vary based on factors such as the number of data points, the presence of value labels, and the chart type. 
Our findings contribute to gauging the general capabilities of LLMs and highlight the need for further exploration and development to fully harness their potential in supporting visual analytic tasks.","authors":[{"affiliations":["Brown University, Providence, United States"],"email":"leooooxzz@gmail.com","is_corresponding":true,"name":"Zhongzheng Xu"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1186","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Exploring the Capability of LLMs in Performing Low-Level Visual Analytic Tasks on SVG Data Visualizations","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1188","abstract":"Vortices and their analysis play a critical role in the understanding of complex phenomena in turbulent flow. Traditional vortex extraction methods, notably region-based techniques, often overlook the entanglement phenomenon, resulting in the inclusion of multiple vortices within a single extracted region. Their separation is necessary for quantifying different types of vortices and their statistics. In this study, we propose a novel vortex separation method that extends the conventional contour tree-based segmentation approach with an additional step termed \u201clayering\u201d. Upon extracting a vortical region using specified vortex criteria (e.g., \u03bb2), we initially establish topological segmentation based on the contour tree, followed by the layering process to allocate appropriate segmentation IDs to unsegmented cells, thus separating individual vortices within the region. However, these regions may still suffer from inaccurate splits, which we address statistically by leveraging the continuity of vorticity lines across the split boundaries. 
Our findings demonstrate a significant improvement in both the separation of vortices and the mitigation of inaccurate splits compared to prior methods.","authors":[{"affiliations":["University of Houston, Houston, United States"],"email":"adeelz92@gmail.com","is_corresponding":true,"name":"Adeel Zafar"},{"affiliations":["University of Houston, Houston, United States"],"email":"zpoorsha@cougarnet.uh.edu","is_corresponding":false,"name":"Zahra Poorshayegh"},{"affiliations":["University of Houston, Houston, United States"],"email":"diyang@uh.edu","is_corresponding":false,"name":"Di Yang"},{"affiliations":["University of Houston, Houston, United States"],"email":"chengu@cs.uh.edu","is_corresponding":false,"name":"Guoning Chen"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1188","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Topological Separation of Vortices","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1189","abstract":"The information visualization research community commonly produces supporting software to demonstrate technical contributions to the field. However, developing this software tends to be an overwhelming task, and the final product tends to be a research prototype without much thought for modularization and re-usability, which makes it harder to replicate and adopt. This paper presents a design pattern for facilitating the creation, dissemination, and re-utilization of visualization techniques using reactive widgets. The design pattern features basic concepts that leverage modern front-end development best practices and standards, which ease development and replication. 
The paper presents several usage examples of the pattern, templates for implementation, and even a wrapper for facilitating the conversion of any Vega specification into a reactive widget.","authors":[{"affiliations":["Northeastern University, San Francisco, United States"],"email":"john.guerra@gmail.com","is_corresponding":true,"name":"John Alexis Guerra-Gomez"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1189","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Towards Reusable and Reactive Widgets for Information Visualization Research and Dissemination","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1191","abstract":"To enable data-driven decision-making across organizations, data professionals need to share insights with their colleagues in context-appropriate communication channels. Many of their colleagues rely on data but are not themselves analysts; furthermore, their colleagues are reluctant or unable to use dedicated analytical applications or dashboards, and they expect communication to take place within threaded collaboration platforms such as Slack or Microsoft Teams. In this paper, we introduce a set of six strategies for adapting content from business intelligence (BI) dashboards into appropriate formats for sharing on collaboration platforms, formats that we refer to as dashboard snapshots. Informed by prior studies of enterprise communication around data, these strategies go beyond redesigning or restyling by considering varying levels of data literacy across an organization, introducing affordances for self-service question-answering, and anticipating the post-sharing lifecycle of data artifacts. These strategies involve the use of templates that are matched to common communicative intents, serving to reduce the workload of data professionals. 
We contribute a formal representation of these strategies and demonstrate their applicability in a comprehensive enterprise communication scenario featuring multiple stakeholders that unfolds over the span of months.","authors":[{"affiliations":["Northwestern University, Evanston, United States"],"email":"hyeokkim2024@u.northwestern.edu","is_corresponding":true,"name":"Hyeok Kim"},{"affiliations":["Tableau Research, Seattle, United States"],"email":"arjun.srinivasan.10@gmail.com","is_corresponding":false,"name":"Arjun Srinivasan"},{"affiliations":["Tableau Research, Seattle, United States"],"email":"mbrehmer@uwaterloo.ca","is_corresponding":false,"name":"Matthew Brehmer"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1191","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Bringing Data into the Conversation: Adapting Content from Business Intelligence Dashboards for Threaded Collaboration Platforms","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1192","abstract":"Narrative visualization has become a crucial tool in data presentation, merging storytelling with data visualization to convey complex information in an engaging and accessible manner. In this study, we review the design space for narrative visualizations, focusing on animation style, through a comprehensive analysis of 71 papers from key visualization venues. We categorize these papers into six broad themes: Animation Style, Interactivity, Technology Usage, Methodology Development, Evaluation Type, and Application Domain. Our findings reveal a significant evolution in the field, marked by a growing preference for animated and non-interactive techniques. This trend reflects a shift towards minimizing user interaction while enhancing the clarity and impact of data presentation. We also identified key trends and technologies that have shaped the field, highlighting the role of technologies, such as machine learning in driving these changes. We offer insights into the dynamic interrelations within the narrative visualization domain, suggesting a future research trajectory that balances interactivity with automated tools to foster increased engagement. 
Our work lays the groundwork for future approaches for effective and innovative narrative visualization in diverse applications.","authors":[{"affiliations":["Louisiana State University, Baton Rouge, United States"],"email":"jyang44@lsu.edu","is_corresponding":true,"name":"Vyri Junhan Yang"},{"affiliations":["Louisiana State University, Baton Rouge, United States"],"email":"mjasim@lsu.edu","is_corresponding":false,"name":"Mahmood Jasim"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1192","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Animating the Narrative: A Review of Animation Styles in Narrative Visualization","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1193","abstract":"We present LinkQ, a system that leverages a large language model (LLM) to facilitate knowledge graph (KG) query construction through natural language question-answering. Traditional approaches often require detailed knowledge of complex graph querying languages, limiting the ability for users -- even experts -- to acquire valuable insights from KG data. LinkQ simplifies this process by first interpreting a user's question, then converting it into a well-formed KG query. By using the LLM to construct a query instead of directly answering the user's question, LinkQ guards against the LLM hallucinating or generating false, erroneous information. By integrating an LLM into LinkQ, users are able to conduct both exploratory and confirmatory data analysis, with the LLM helping to iteratively refine open-ended questions into precise ones. To demonstrate the efficacy of LinkQ, we conducted a qualitative study with five KG practitioners and distill their feedback. 
Our results indicate that practitioners find LinkQ effective for KG question-answering, and desire future LLM-assisted systems for the exploratory analysis of graph databases.","authors":[{"affiliations":["MIT Lincoln Laboratory, Lexington, United States"],"email":"harry.li@ll.mit.edu","is_corresponding":true,"name":"Harry Li"},{"affiliations":["Tufts University, Medford, United States"],"email":"gabriel.appleby@tufts.edu","is_corresponding":false,"name":"Gabriel Appleby"},{"affiliations":["MIT Lincoln Laboratory, Lexington, United States"],"email":"ashley.suh@ll.mit.edu","is_corresponding":false,"name":"Ashley Suh"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1193","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"LinkQ: An LLM-Assisted Visual Interface for Knowledge Graph Question-Answering","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1199","abstract":"In the digital landscape, the ubiquity of data visualizations in media underscores the necessity for accessibility to ensure inclusivity for all users, including those with visual impairments. Current visual content often fails to cater to the needs of screen reader users due to the absence of comprehensive textual descriptions. To address this gap, we propose in this paper a framework designed to empower media content creators to transform charts into descriptive narratives. This tool not only facilitates the understanding of complex visual data through text but also fosters a broader awareness of accessibility in digital content creation. Through the application of this framework, users can interpret and convey the insights of data visualizations more effectively, accommodating a diverse audience. 
Our evaluations reveal that this tool not only enhances the comprehension of data visualizations but also promotes new perspectives on the represented data, thereby broadening the interpretative possibilities for all users.","authors":[{"affiliations":["Polytechnique Montr\u00e9al, Montr\u00e9al, Canada"],"email":"qiangxu1204@gmail.com","is_corresponding":true,"name":"Qiang Xu"},{"affiliations":["Polytechnique Montreal, Montreal, Canada"],"email":"thomas.hurtut@polymtl.ca","is_corresponding":false,"name":"Thomas Hurtut"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1199","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"From Graphs to Words: A Computer-Assisted Framework for the Production of Accessible Text Descriptions","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1207","abstract":"An essential task of an air traffic controller is to manage the traffic flow by predicting future trajectories. Complex traffic patterns are difficult to predict and manage and impose cognitive load on the air traffic controllers (ATCos). In this work, we present an interactive visual analytics interface which facilitates detection and resolution of complex traffic patterns for air traffic controllers. The interface supports the users in detecting complex clusters of aircraft and uses visual representations to communicate the detected complexity to the controllers and to propose re-routing. The interface further enables the ATCos to visualize and simultaneously compare how different re-routing strategies for each individual aircraft yield a reduction of complexity in the entire sector for the next hour. The development of the concepts was supported by the domain-specific feedback we received from six fully licensed and operational air traffic controllers in an iterative design process over a period of 14 months.","authors":[{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden","Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"elmira.zohrevandi@liu.se","is_corresponding":true,"name":"Elmira Zohrevandi"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden","Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"katerina.vrotsou@liu.se","is_corresponding":false,"name":"Katerina Vrotsou"},{"affiliations":["Institute of Science and Technology, Norrk\u00f6ping, Sweden","Institute of Science and Technology, Norrk\u00f6ping, Sweden"],"email":"carl.westin@liu.se","is_corresponding":false,"name":"Carl A. L. 
Westin"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden","Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"jonas.lundberg@liu.se","is_corresponding":false,"name":"Jonas Lundberg"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden","Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"anders.ynnerman@liu.se","is_corresponding":false,"name":"Anders Ynnerman"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1207","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Design of a Real-Time Visual Analytics Decision Support Interface to Manage Air Traffic Complexity","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1211","abstract":"Transfer function design is crucial in volume rendering, as it directly influences the visual representation and interpretation of volumetric data. However, creating effective transfer functions that align with users\u2019 visual objectives is often challenging due to the complex parameter space and the semantic gap between transfer function values and features of interest within the volume. In this work, we propose a novel approach that leverages recent advancements in language-vision models to bridge this semantic gap. By employing a fully differentiable rendering pipeline and an image-based loss function guided by language descriptions, our method generates transfer functions that yield volume-rendered images closely matching the user\u2019s intent. We demonstrate the effectiveness of our approach in creating meaningful transfer functions from simple descriptions, empowering users to intuitively express their desired visual outcomes with minimal effort. This advancement streamlines the transfer function design process and makes volume rendering more accessible to a broader range of users.","authors":[{"affiliations":["Vanderbilt University, Nashville, United States"],"email":"sangwon.jeong@vanderbilt.edu","is_corresponding":true,"name":"Sangwon Jeong"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"jixianli@sci.utah.edu","is_corresponding":false,"name":"Jixian Li"},{"affiliations":["Lawrence Livermore National Laboratory , Livermore, United States"],"email":"shusenl@sci.utah.edu","is_corresponding":false,"name":"Shusen Liu"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"crj@sci.utah.edu","is_corresponding":false,"name":"Chris R. 
Johnson"},{"affiliations":["Vanderbilt University, Nashville, United States"],"email":"matthew.berger@vanderbilt.edu","is_corresponding":false,"name":"Matthew Berger"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1211","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Text-based transfer function design for semantic volume rendering","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1224","abstract":"Diffusion-based generative models\u2019 impressive ability to create convincing images has garnered global attention. However, their complex structures and operations often pose challenges for non-experts to grasp. We present Diffusion Explainer, the first interactive visualization tool that explains how Stable Diffusion transforms text prompts into images. Diffusion Explainer tightly integrates a visual overview of Stable Diffusion\u2019s complex structure with explanations of the underlying operations. By comparing image generation of prompt variants, users can discover the impact of keyword changes on image generation. A 56-participant user study demonstrates that Diffusion Explainer offers substantial learning benefits to non-experts. Our tool has been used by over 10,300 users from 124 countries at https://poloclub.github.io/diffusion-explainer/.","authors":[{"affiliations":["Georgia Tech, Atlanta, United States"],"email":"seongmin@gatech.edu","is_corresponding":true,"name":"Seongmin Lee"},{"affiliations":["GA Tech, Atlanta, United States","IBM Research AI, Cambridge, United States"],"email":"benjamin.hoover@ibm.com","is_corresponding":false,"name":"Benjamin Hoover"},{"affiliations":["IBM Research AI, Cambridge, United States"],"email":"hendrik@strobelt.com","is_corresponding":false,"name":"Hendrik Strobelt"},{"affiliations":["Georgia Tech, Atlanta, United States"],"email":"jayw@gatech.edu","is_corresponding":false,"name":"Zijie J. 
Wang"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"speng65@gatech.edu","is_corresponding":false,"name":"ShengYun Peng"},{"affiliations":["Georgia Institute of Technology , Atlanta , United States"],"email":"apwright@gatech.edu","is_corresponding":false,"name":"Austin P Wright"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"kevin.li@gatech.edu","is_corresponding":false,"name":"Kevin Li"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"haekyu@gatech.edu","is_corresponding":false,"name":"Haekyu Park"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"alexanderyang@gatech.edu","is_corresponding":false,"name":"Haoyang Yang"},{"affiliations":["Georgia Tech, Atlanta, United States"],"email":"polo@gatech.edu","is_corresponding":false,"name":"Duen Horng (Polo) Chau"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1224","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Diffusion Explainer: Visual Explanation for Text-to-image Stable Diffusion","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1235","abstract":"A high number of samples often leads to occlusion in scatterplots, which hinders data perception and analysis. De-cluttering approaches based on spatial transformation reduce visual clutter by remapping samples using the entire available scatterplot domain. Such regularized scatterplots may still be used for data analysis tasks, if the spatial transformation is smooth and preserves the original neighborhood relations of samples. Recently, Rave et al. proposed an efficient regularization method based on integral images. We propose a generalization of their regularization scheme using sector-based transformations with the aim of increasing sample uniformity of the resulting scatterplot. 
We document the improvement of our approach using various uniformity measures.","authors":[{"affiliations":["University of M\u00fcnster, M\u00fcnster, Germany"],"email":"hennes.rave@uni-muenster.de","is_corresponding":true,"name":"Hennes Rave"},{"affiliations":["University of M\u00fcnster, M\u00fcnster, Germany"],"email":"molchano@uni-muenster.de","is_corresponding":false,"name":"Vladimir Molchanov"},{"affiliations":["University of M\u00fcnster, M\u00fcnster, Germany"],"email":"linsen@uni-muenster.de","is_corresponding":false,"name":"Lars Linsen"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1235","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Uniform Sample Distribution in Scatterplots via Sector-based Transformation","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1236","abstract":"Automatically generating data visualizations in response to human utterances on datasets necessitates a deep semantic understanding of the data utterance, including implicit and explicit references to data attributes, visualization tasks, and necessary data preparation steps. Natural Language Interfaces (NLIs) for data visualization have explored ways to infer such information, yet challenges persist due to inherent uncertainty in human speech. Recent advances in Large Language Models (LLMs) provide an avenue to address these challenges, but their ability to extract the relevant semantic information remains unexplored. In this study, we evaluate four publicly available LLMs (GPT-4, Gemini-Pro, Llama3, and Mixtral), investigating their ability to comprehend utterances even in the presence of uncertainty and identify the relevant data context and visual tasks. Our findings reveal that LLMs are sensitive to uncertainties in utterances. Despite this sensitivity, they are able to extract the relevant data context. However, LLMs struggle with inferring visualization tasks. Based on these results, we highlight future research directions on using LLMs for visualization generation. Our supplementary materials have been shared on OSF: https://osf.io/j342a/wiki/home/?view_only=b4051ffc6253496d9bce818e4a89b9f9","authors":[{"affiliations":["University of Maryland, College Park, United States"],"email":"hbako@umd.edu","is_corresponding":true,"name":"Hannah K. 
Bako"},{"affiliations":["University of Maryland, College Park, United States"],"email":"arshnoorbhutani8@gmail.com","is_corresponding":false,"name":"Arshnoor Bhutani"},{"affiliations":["The University of Texas at Austin, Austin, United States"],"email":"xinyi.liu@utexas.edu","is_corresponding":false,"name":"Xinyi Liu"},{"affiliations":["University of Maryland, College Park, United States"],"email":"kcobbina@cs.umd.edu","is_corresponding":false,"name":"Kwesi Adu Cobbina"},{"affiliations":["University of Maryland, College Park, United States"],"email":"leozcliu@umd.edu","is_corresponding":false,"name":"Zhicheng Liu"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1236","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Evaluating the Semantic Profiling Abilities of LLMs for Natural Language Utterances in Data Visualization","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1248","abstract":"Statistical practices such as building regression models or running hypothesis tests rely on following rigorous procedures of steps and verifying assumptions on data to produce valid results. However, common statistical tools do not verify users\u2019 decision choices and provide low-level statistical functions without instructions on the whole analysis practice. Users can easily misuse analysis methods, potentially decreasing the validity of results. To address this problem, we introduce GuidedStats, an interactive interface within computational notebooks that encapsulates guidance, models, visualization, and exportable results into interactive workflows. It breaks down typical analysis processes, such as linear regression and two-sample T-tests, into interactive steps supplemented with automatic visualizations and explanations for step-wise evaluation. Users can iterate on input choices to refine their models, while recommended actions and exports allow the user to continue their analysis in code. 
Case studies show how GuidedStats offers valuable instructions for conducting fluid statistical analyses while finding possible assumption violations in the underlying data, supporting flexible and accurate statistical analyses.","authors":[{"affiliations":["New York University, New York, United States"],"email":"yz9381@nyu.edu","is_corresponding":true,"name":"Yuqi Zhang"},{"affiliations":["Carnegie Mellon University, Pittsburgh, United States"],"email":"adamperer@cmu.edu","is_corresponding":false,"name":"Adam Perer"},{"affiliations":["Carnegie Mellon University, Pittsburgh, United States"],"email":"willepp@cmu.edu","is_corresponding":false,"name":"Will Epperson"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1248","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Guided Statistical Workflows with Interactive Explanations and Assumption Checking","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1264","abstract":"The Local Moran's I statistic is a valuable tool for identifying localized patterns of spatial autocorrelation. Understanding these patterns is crucial in spatial analysis, but interpreting the statistic can be difficult. To simplify this process, we introduce three novel visualizations that enhance the interpretation of Local Moran's I results. These visualizations can be interactively linked to one another, and to established visualizations, to offer a more holistic exploration of the results. 
We provide a JavaScript library with implementations of these new visual elements, along with a web dashboard that demonstrates their integrated use.","authors":[{"affiliations":["NIH, Rockville, United States","Queen's University, Belfast, United Kingdom"],"email":"masonlk@nih.gov","is_corresponding":true,"name":"Lee Mason"},{"affiliations":["Queen's University Belfast , Belfast , United Kingdom"],"email":"b.hicks@qub.ac.uk","is_corresponding":false,"name":"Bl\u00e1naid Hicks"},{"affiliations":["National Institutes of Health, Rockville, United States"],"email":"jonas.dealmeida@nih.gov","is_corresponding":false,"name":"Jonas S Almeida"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1264","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Demystifying Spatial Dependence: Interactive Visualizations for Interpreting Local Spatial Autocorrelation","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1274","abstract":"This study examines the impact of positive and negative contrast polarities (i.e., light and dark modes) on the performance of younger adults and people in their late adulthood (PLA). In a crowdsourced study with 134 participants (69 below age 60, 66 aged 60 and above), we assessed their accuracy and time performing analysis tasks across three common visualization types (Bar, Line, Scatterplot) and two contrast polarities (positive and negative). We observed that, across both age groups, the polarity that led to better performance and the resulting amount of improvement varied on an individual basis, with each polarity benefiting comparable proportions of participants. Additionally, we observed that the choice of contrast polarity can have an impact on time similar to that of the choice of visualization type, resulting in an average percent difference of around 36%. These findings indicate that, overall, the effects of contrast polarity on visual analysis performance do not noticeably change with age. 
Furthermore, they underscore the importance of making visualizations available in both contrast polarities to better support a broad audience with differing needs.","authors":[{"affiliations":["University of Massachusetts Amherst, Amherst, United States"],"email":"zwhile@cs.umass.edu","is_corresponding":true,"name":"Zack While"},{"affiliations":["University of Massachusetts Amherst, Amherst, United States"],"email":"asarv@cs.umass.edu","is_corresponding":false,"name":"Ali Sarvghad"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1274","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Dark Mode or Light Mode? Exploring the Impact of Contrast Polarity on Visualization Performance Between Age Groups","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1276","abstract":"Machine Learning models for chart-grounded Q&A (CQA) often treat charts as images, but performing CQA on pixel values has proven challenging. We thus investigate a resource overlooked by current ML-based approaches: the declarative documents describing how charts should visually encode data (i.e., chart specifications). In this work, we use chart specifications to enhance language models (LMs) for chart-reading tasks, such that the resulting system can robustly understand language for CQA. Through a case study with 359 bar charts, we test novel fine-tuning schemes on both GPT-3 and T5 using a new dataset curated for two CQA tasks: question-answering and visual explanation generation. Our text-only approaches strongly outperform vision-based GPT-4 on explanation generation (99% vs. 63% accuracy), and show promising results for question-answering (57-67% accuracy). Through in-depth experiments, we also show that our text-only approaches are mostly robust to natural language variation.","authors":[{"affiliations":["Adobe Research, San Jose, United States"],"email":"victorbursztyn2022@u.northwestern.edu","is_corresponding":true,"name":"Victor S. 
Bursztyn"},{"affiliations":["Adobe Research, Seattle, United States"],"email":"jhoffs@adobe.com","is_corresponding":false,"name":"Jane Hoffswell"},{"affiliations":["Adobe Research, San Jose, United States"],"email":"sguo@adobe.com","is_corresponding":false,"name":"Shunan Guo"},{"affiliations":["Adobe Research, San Jose, United States"],"email":"eunyee@adobe.com","is_corresponding":false,"name":"Eunyee Koh"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1276","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Representing Charts as Text for Language Models: An In-Depth Study of Question Answering for Bar Charts","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1277","abstract":"Trust is a subjective yet fundamental component of human-computer interaction, and is a determining factor in shaping the efficacy of data visualizations. Prior research has identified five dimensions of trust assessment in visualizations (credibility, clarity, reliability, familiarity, and confidence), and observed that these dimensions tend to vary predictably along with certain features of the visualization being evaluated. This raises a further question: how do the design features driving viewers' trust assessment vary with the characteristics of the viewers themselves? By reanalyzing data from these studies through the lens of individual differences, we build a more detailed map of the relationships between design features, individual characteristics, and trust behaviors. In particular, we model the distinct contributions of endogenous design features (such as visualization type, or the use of color) and exogenous user characteristics (such as visualization literacy), as well as the interactions between them. We then use these findings to make recommendations for individualized and adaptive visualization design.","authors":[{"affiliations":["Smith College, Northampton, United States"],"email":"jcrouser@smith.edu","is_corresponding":true,"name":"R. Jordan Crouser"},{"affiliations":["Smith College, Northampton, United States"],"email":"cmatoussi@smith.edu","is_corresponding":false,"name":"Syrine Matoussi"},{"affiliations":["Smith College, Northampton, United States"],"email":"ekung@smith.edu","is_corresponding":false,"name":"Lan Kung"},{"affiliations":["Washington University in St. Louis, St. Louis, United States"],"email":"p.saugat@wustl.edu","is_corresponding":false,"name":"Saugat Pandey"},{"affiliations":["Washington University in St. Louis, St. Louis, United States"],"email":"m.oen@wustl.edu","is_corresponding":false,"name":"Oen G McKinley"},{"affiliations":["Washington University in St. Louis, St. 
Louis, United States"],"email":"alvitta@wustl.edu","is_corresponding":false,"name":"Alvitta Ottley"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1277","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Building and Eroding: Exogenous and Endogenous Factors that Influence Subjective Trust in Visualization","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1285","abstract":"This study examines the impact of social-comparison risk visualizations on public health communication, comparing the effects of traditional bar charts against alternative jitter plots emphasizing geographic variability (geo jitter). The research highlights that whereas both visualization types increased perceived vulnerability, behavioral intent, and policy support, the geo jitter plots were significantly more effective in reducing unjustified personal attributions. Importantly, the findings also underscore the emotional challenges faced by visualization viewers from marginalized communities, indicating a need for designs that are sensitive to the potential for reinforcing stereotypes or eliciting negative emotions. This work suggests a strategic reevaluation of visual communication tools in public health to enhance understanding and engagement without contributing to negative attributions or emotional distress.","authors":[{"affiliations":["3iap, Raleigh, United States"],"email":"eli@3iap.com","is_corresponding":false,"name":"Eli Holder"},{"affiliations":["Northeastern University, Boston, United States","University of California Merced, Merced, United States"],"email":"l.padilla@northeastern.edu","is_corresponding":true,"name":"Lace M. Padilla"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1285","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"\"Must Be a Tuesday\": Affect, Attribution, and Geographic Variability in Equity-Oriented Visualizations of Population Health Disparities","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1292","abstract":"Collaborative planning for congenital heart diseases typically involves creating physical heart models through 3D printing, which are then examined by both surgeons and cardiologists. 
Recent developments in mobile augmented reality (AR) technologies have presented a viable alternative, known for their ease of use and portability. However, there is still a lack of research examining the utilization of multi-user mobile AR environments to support collaborative planning for cardiovascular surgeries. We created ARCollab, an iOS AR app designed for enabling multiple surgeons and cardiologists to interact with a patient's 3D heart model in a shared environment. ARCollab enables surgeons and cardiologists to import heart models, manipulate them through gestures and collaborate with other users, eliminating the need for fabricating physical heart models. Our evaluation of ARCollab's usability and usefulness in enhancing collaboration, conducted with three cardiothoracic surgeons and two cardiologists, marks the first human evaluation of a multi-user mobile AR tool for surgical planning. ARCollab is open-source, available at https://github.com/poloclub/arcollab.","authors":[{"affiliations":["Georgia Tech, Atlanta, United States"],"email":"pratham.mehta001@gmail.com","is_corresponding":true,"name":"Pratham Darrpan Mehta"},{"affiliations":["Georgia Tech, Atlanta, United States"],"email":"rnarayanan39@gatech.edu","is_corresponding":false,"name":"Rahul Ozhur Narayanan"},{"affiliations":["Georgia Tech, Atlanta, United States"],"email":"harsha5431@gmail.com","is_corresponding":false,"name":"Harsha Karanth"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"alexanderyang@gatech.edu","is_corresponding":false,"name":"Haoyang Yang"},{"affiliations":["Emory University, Atlanta, United States"],"email":"slesnickt@kidsheart.com","is_corresponding":false,"name":"Timothy C Slesnick"},{"affiliations":["Emory University/Children's Healthcare of Atlanta, Atlanta, United States"],"email":"fawwaz.shaw@choa.org","is_corresponding":false,"name":"Fawwaz Shaw"},{"affiliations":["Georgia Tech, Atlanta, United States"],"email":"polo@gatech.edu","is_corresponding":false,"name":"Duen Horng (Polo) Chau"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1292","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Multi-User Mobile Augmented Reality for Cardiovascular Surgical Planning","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-short-1301","abstract":"\u201cReactionary delay\u201d is a result of the accumulated cascading effects of knock-on train delays. It is becoming an increasing problem as shared railway infrastructure becomes more crowded. The chaotic nature of its effects is notoriously hard to predict. We use a stochastic Monte-Carlo-style simulation of reactionary delay that produces whole distributions of likely reactionary delay. 
Our contribution is demonstrating how Zoomable GlyphTables -- case-by-variable tables in which cases are rows, variables are columns (complex composite metrics that incorporate distributions), and cells contain mini-charts that depict these at different levels of detail through zoom interaction -- help interpret these results, aiding understanding of the causes and effects of reactionary delay and informing timetable robustness testing and tweaking. We describe our design principles, demonstrate how these supported our analytical tasks, and reflect on the wider potential of Zoomable GlyphTables.","authors":[{"affiliations":["City, University of London, London, United Kingdom"],"email":"a.slingsby@city.ac.uk","is_corresponding":true,"name":"Aidan Slingsby"},{"affiliations":["Risk Solutions, Warrington, United Kingdom"],"email":"jonathan.hyde@risksol.co.uk","is_corresponding":false,"name":"Jonathan Hyde"}],"award":"","doi":"","event_id":"v-short","event_title":"VIS Short Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-short-1301","image_caption":"","keywords":[],"paper_type":"short","paper_type_color":"#FDBB30","paper_type_name":"VIS Short Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"short0","session_room":"None","session_title":"Short Papers","session_uid":"v-short","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Short Papers"],"time_stamp":"","title":"Zoomable Glyph Tables for Interpreting Probabilistic Model Outputs for Reactionary Train Delays","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1026","abstract":"We present a visual analytics approach for multi-level visual exploration of users\u2019 interaction strategies in an interactive digital environment. Interactive touchscreen exhibits in informal learning environments, such as museums and science centers, often incorporate frameworks that classify learning processes, such as Bloom\u2019s taxonomy, to achieve better user engagement and knowledge transfer. To analyze user behavior within these digital environments, interaction logs are recorded to capture diverse exploration strategies. However, analysis of such logs is challenging, especially in terms of coupling interactions and cognitive learning processes, and existing work within learning and educational contexts remains limited. To address these gaps, we develop a visual analytics approach for analyzing interaction logs that supports exploration at the individual user level and multi-user comparison. The approach utilizes algorithmic methods to identify similarities in users' interactions and reveal their exploration strategies. We motivate and illustrate our approach through an application scenario, using event sequences derived from interaction log data in an experimental study conducted with science center visitors from diverse backgrounds and demographics. The study involves 14 users completing tasks of increasing complexity, designed to stimulate different levels of cognitive learning processes. 
We implement our approach in an interactive visual analytics prototype system, named VISID, and together with domain experts, discover a set of task-solving exploration strategies, such as \"cascading\" and \"nested-loop\", which reflect different levels of learning processes from Bloom's taxonomy. Finally, we discuss the generalizability and scalability of the presented system and the need for further research with data acquired in the wild.","authors":[{"affiliations":["Media and Information Technology, Norrk\u00f6ping, Sweden"],"email":"peilin.yu@liu.se","is_corresponding":true,"name":"Peilin Yu"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"aida.vitoria@liu.se","is_corresponding":false,"name":"Aida Nordman"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"marta.koc-januchta@liu.se","is_corresponding":false,"name":"Marta M. Koc-Januchta"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"konrad.schonborn@liu.se","is_corresponding":false,"name":"Konrad J Sch\u00f6nborn"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"lonni.besancon@gmail.com","is_corresponding":false,"name":"Lonni Besan\u00e7on"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"katerina.vrotsou@liu.se","is_corresponding":false,"name":"Katerina Vrotsou"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1026","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Revealing Interaction Dynamics: Multi-Level Visual Exploration of User Strategies with an Interactive Digital Environment","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1031","abstract":"In soccer, player scouting aims to find players suitable for a team to increase the winning chance in future matches. To scout suitable players, coaches and analysts need to consider various complicated factors, such as the players' performance in the tactics of a new team, which is hard to learn directly from their historical performance. Match simulation methods have been introduced to scout players by estimating their expected contributions to a new team. However, they usually focus on the simulation of match results and hardly support interactive analysis to navigate potential target players and compare them in fine-grained simulated behaviors. In this work, we propose a visual analytics method to assist soccer player scouting based on match simulation. We construct a two-level match simulation framework for estimating both match results and player behaviors when a player comes to a new team. Based on the framework, we develop a visual analytics system, Team-Scouter, to facilitate the simulative-based soccer player scouting process through player navigation, comparison, and explanation. 
With our system, coaches and analysts can find potential players suitable for the team and compare them on historical and expected performances. To explain the players' expected performances, the system provides a visual comparison between the simulated behaviors of the player and the actual ones. The usefulness and effectiveness of the system are demonstrated by two case studies on a real-world dataset and an expert interview.","authors":[{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"caoanqi28@163.com","is_corresponding":true,"name":"Anqi Cao"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"xxie@zju.edu.cn","is_corresponding":false,"name":"Xiao Xie"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"2366385033@qq.com","is_corresponding":false,"name":"Runjin Zhang"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"1282533692@qq.com","is_corresponding":false,"name":"Yuxin Tian"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"fanmu_032@zju.edu.cn","is_corresponding":false,"name":"Mu Fan"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"zhang_hui@zju.edu.cn","is_corresponding":false,"name":"Hui Zhang"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"ycwu@zju.edu.cn","is_corresponding":false,"name":"Yingcai Wu"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1031","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Team-Scouter: Simulative Visual Analytics of Soccer Player Scouting","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1032","abstract":"Dynamic topic modeling is useful for discovering the development and change in latent topics over time. However, present methodology relies on algorithms that separate document and word representations. This prevents the creation of a meaningful embedding space where changes in word usage and documents can be directly analyzed in a temporal context. This paper proposes an expansion of the compass-aligned temporal Word2Vec methodology into dynamic topic modeling. Such a method allows for the direct comparison of word and document embeddings across time in dynamic topics. This enables the creation of visualizations that incorporate diachronic word embeddings within the context of documents into topic visualizations. In experiments against the current state-of-the-art, our proposed method demonstrates overall competitive performance in topic relevancy and diversity across temporal datasets of varying size. 
Simultaneously, it provides insightful visualizations focused on temporal word embeddings while maintaining the insights provided by global topic evolution, advancing our understanding of how topics evolve over time.","authors":[{"affiliations":["Virginia Tech, Blacksburg, United States"],"email":"d4n1elp@vt.edu","is_corresponding":true,"name":"Daniel Palamarchuk"},{"affiliations":["Virginia Polytechnic Institute of Technology , Blacksburg, United States"],"email":"lemaraw@vt.edu","is_corresponding":false,"name":"Lemara Williams"},{"affiliations":["Virginia Tech, Blacksburg, United States"],"email":"bmayer@cs.vt.edu","is_corresponding":false,"name":"Brian Mayer"},{"affiliations":["Savannah River National Laboratory, Aiken, United States"],"email":"thomas.danielson@srnl.doe.gov","is_corresponding":false,"name":"Thomas Danielson"},{"affiliations":["Tulane University, New Orleans, United States"],"email":"rfaust1@tulane.edu","is_corresponding":false,"name":"Rebecca Faust"},{"affiliations":["Savannah River National Laboratory, Aiken, United States"],"email":"larry.deschaine@srnl.doe.gov","is_corresponding":false,"name":"Larry M Deschaine PhD"},{"affiliations":["Virginia Tech, Blacksburg, United States"],"email":"north@vt.edu","is_corresponding":false,"name":"Chris North"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1032","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Visualizing Temporal Topic Embeddings with a Compass","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1039","abstract":"Propagation analysis refers to studying how information spreads on social media, a pivotal endeavor for understanding social sentiment and public opinions. Numerous studies contribute to visualizing information spread, but few have considered the implicit and complex diffusion patterns among multiple platforms. To bridge the gap, we collaborated with professionals to discover crucial factors that dissect the mechanism of cross-platform information spread. Based on that, we propose an information diffusion model that estimates the likelihood of a topic/post spreading among different social media platforms. Moreover, we propose a novel visual metaphor that encapsulates cross-platform patterns in a manner analogous to the spread of seeds across gardens. Specifically, we visualize social platforms, posts, implicit cross-platform routes, and salient instances as elements of a virtual ecosystem \u2014 gardens, flowers, winds, and seeds, respectively. We further develop a visual analytic system, namely BloomWind, that enables users to quickly identify the cross-platform diffusion patterns and investigate the relevant social media posts. 
Ultimately, we demonstrate the usage of BloomWind through two case studies and validate its effectiveness using expert interviews.","authors":[{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"940662579@qq.com","is_corresponding":true,"name":"Jianing Yin"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"hzjia@zju.edu.cn","is_corresponding":false,"name":"Hanze Jia"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"zhoubuwei@zju.edu.cn","is_corresponding":false,"name":"Buwei Zhou"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"tangtan@zju.edu.cn","is_corresponding":false,"name":"Tan Tang"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"yingluu@zju.edu.cn","is_corresponding":false,"name":"Lu Ying"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"sn_ye@zju.edu.cn","is_corresponding":false,"name":"Shuainan Ye"},{"affiliations":["Michigan State University, East Lansing, United States"],"email":"pengtaiq@msu.edu","is_corresponding":false,"name":"Tai-Quan Peng"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"ycwu@zju.edu.cn","is_corresponding":false,"name":"Yingcai Wu"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1039","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Blowing Seeds Across Gardens: Visualizing Implicit Propagation of Cross-Platform Social Media Posts","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1059","abstract":"When treating Head and Neck cancer patients, oncologists have to navigate a complicated series of treatment decisions for each patient. The relationship between each treatment decision and the potential tradeoff of tumor control and toxicity risk is poorly understood, leaving oncologists to largely rely on institutional knowledge and general guidelines that do not take into account specific patient circumstances. Evaluating these risks relies on a complicated understanding of several different factors such as patient health, spatial tumor spread and treatment side effect risk that cannot be captured through simple heuristics. To support clinicians in better understanding tradeoffs when deciding on treatment courses, we developed DITTO, a digital-twin and visual computing system that allows clinicians to analyze nuanced risk for each patient and decide on an optimal treatment plan. DITTO relies on a sequential Deep Reinforcement Learning (DRL) system to deliver personalized estimates of both long-term and short-term disease outcome and toxicity risk for HNC patients. Based on a participatory collaborative design alongside oncologists, we also implement several explainability methods to support clinical trust and encourage healthy skepticism when using our models. 
We evaluate the efficacy of our model through quantitative evaluation of model performance and case studies with qualitative feedback. Finally, we discuss design lessons for developing clinical visual XAI applications for clinical end users.","authors":[{"affiliations":["University of Illinois at Chicago, Chicago, United States"],"email":"awentze2@uic.edu","is_corresponding":true,"name":"Andrew Wentzel"},{"affiliations":["University of Houston, Houston, United States"],"email":"skattia@mdanderson.org","is_corresponding":false,"name":"Serageldin Attia"},{"affiliations":["University of Illinois Chicago, Chicago, United States"],"email":"zhangz@uic.edu","is_corresponding":false,"name":"Xinhua Zhang"},{"affiliations":["University of Iowa, Iowa City, United States"],"email":"guadalupe-canahuate@uiowa.edu","is_corresponding":false,"name":"Guadalupe Canahuate"},{"affiliations":["University of Texas, Houston, United States"],"email":"cdfuller@mdanderson.org","is_corresponding":false,"name":"Clifton David Fuller"},{"affiliations":["University of Illinois at Chicago, Chicago, United States"],"email":"g.elisabeta.marai@gmail.com","is_corresponding":false,"name":"G. Elisabeta Marai"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1059","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"DITTO: A Visual Digital Twin for Interventions and Temporal Treatment Outcomes in Head and Neck Cancer","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1060","abstract":"There is increased interest in understanding the interplay between text and visuals in the field of data visualization. However, this attention has predominantly been on the use of text in standalone visualizations (such as text annotation overlays) or augmenting text stories supported by a series of independent views. In this paper, we shift from the traditional focus on single-chart annotations to characterize the nuanced but crucial communication role of text in the complex environment of interactive dashboards. Through a survey and analysis of 190 dashboards in the wild, plus 13 expert interview sessions with experienced dashboard authors, we highlight the distinctive nature of text as an integral component of the dashboard experience, while delving into the categories, semantic levels, and functional roles of text, and exploring how these text elements are coalesced by dashboard authors to guide and inform dashboard users. Our contributions are threefold. First, we distill qualitative and quantitative findings from our studies to characterize current practices of text use in dashboards, including a categorization of text-based components and design patterns. 
Second, we leverage current practices and existing literature to propose, discuss, and validate recommended practices for text in dashboards, embodied as a set of 12 heuristics that underscore the semantic and functional role of text in offering navigational cues, contextualizing data insights, supporting reading order, among other concerns. Third, we reflect on our findings plus existing literature to identify gaps and propose opportunities for data visualization researchers to push the boundaries on text usage for dashboards, from authoring support and interactivity to text generation and content personalization. Our research underscores the significance of elevating text as a first-class citizen in data visualization, and the need to support the inclusion of textual components and their interactive affordances in dashboard design.","authors":[{"affiliations":["Tableau Research, Seattle, United States"],"email":"nicole.sultanum@gmail.com","is_corresponding":true,"name":"Nicole Sultanum"},{"affiliations":["Tableau Research, Palo Alto, United States"],"email":"vsetlur@tableau.com","is_corresponding":false,"name":"Vidya Setlur"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1060","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"From Instruction to Insight: Exploring the Semantic and Functional Roles of Text in Interactive Dashboards","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1063","abstract":"While previous work has found success in deploying visualizations as museum exhibits, it has not investigated whether museum context impacts visitor behaviour with these exhibits. We present an interactive Deep-time Literacy Visualization Exhibit (DeLVE) to help museum visitors understand deep time (lengths of extremely long geological processes) by improving proportional reasoning skills through comparison of different time periods. DeLVE uses a new visualization idiom, Connected Multi-Tier Ranges, to visualize curated datasets of past events across multiple scales of time, relating extreme scales with concrete scales that have more familiar magnitudes and units. Museum staff at three separate museums approved the deployment of DeLVE as a digital kiosk, and devoted time to curating a unique dataset in each of them. We collect data from two sources, an observational study and system trace logs. We discuss the importance of context: similar museum exhibits in different contexts were received very differently by visitors. We additionally discuss differences in our process from Sedlmair et al.'s design study methodology which is focused on design studies triggered by connection with collaborators rather than the discovery of a concept to communicate. 
Supplemental materials are available at: https://osf.io/z53dq/?view_only=4df33aad207144aca149982412125541","authors":[{"affiliations":["The University of British Columbia, Vancouver, Canada"],"email":"marasolen@gmail.com","is_corresponding":true,"name":"Mara Solen"},{"affiliations":["University of British Columbia , Vancouver, Canada"],"email":"sultananigar70@gmail.com","is_corresponding":false,"name":"Nigar Sultana"},{"affiliations":["University of British Columbia, Vancouver, Canada"],"email":"laura.lukes@ubc.ca","is_corresponding":false,"name":"Laura A. Lukes"},{"affiliations":["University of British Columbia, Vancouver, Canada"],"email":"tmm@cs.ubc.ca","is_corresponding":false,"name":"Tamara Munzner"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1063","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"DeLVE into Earth\u2019s Past: A Visualization-Based Exhibit Deployed Across Multiple Museum Contexts","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1067","abstract":"Large Language Models (LLMs), such as ChatGPT and Llama, have revolutionized various domains through their impressive natural language processing capabilities. However, their deployment raises significant ethical and security concerns, including their potential misuse for generating fake news or aiding illegal activities. Thus, ensuring the development of secure and trustworthy LLMs is crucial. Traditional red teaming approaches for identifying vulnerabilities in AI models are limited by their reliance on manual prompt construction and expertise. This paper introduces a novel visual analytics system, AdversaFlow, designed to enhance the security of LLMs against adversarial attacks through human-AI collaboration. Our system, which involves adversarial training between a target model and a red model, is equipped with a unique multi-level adversarial flow visualization and a fluctuation path visualization technique. These features provide a detailed insight into the adversarial dynamics and the robustness of LLMs, thereby enabling AI security experts to identify and mitigate vulnerabilities effectively. We deliver quantitative evaluations for the models and present case studies that validate the utility of our system and share insights for future AI security solutions. 
Our contributions include a human-AI collaboration framework for LLM red teaming, a comprehensive visual analytics system to support adversarial pattern presentation and fluctuation analysis, and valuable lessons learned in visual analytics for AI security.","authors":[{"affiliations":["Zhejiang University, Ningbo, China"],"email":"dengdazhen@outlook.com","is_corresponding":true,"name":"Dazhen Deng"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"zhangchuhan024@163.com","is_corresponding":false,"name":"Chuhan Zhang"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"huawzheng@gmail.com","is_corresponding":false,"name":"Huawei Zheng"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"yw.pu@zju.edu.cn","is_corresponding":false,"name":"Yuwen Pu"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"sji@zju.edu.cn","is_corresponding":false,"name":"Shouling Ji"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"ycwu@zju.edu.cn","is_corresponding":false,"name":"Yingcai Wu"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1067","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"AdversaFlow: Visual Red Teaming for Large Language Models with Multi-Level Adversarial Flow","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1077","abstract":"A growing body of work draws on feminist thinking to challenge assumptions about how people engage with and use visualizations. This work draws on feminist values, driving design and research guidelines that account for the influences of power and neglect. This prior work is largely prescriptive, however, forgoing articulation of how feminist theories of knowledge \u2014 or feminist epistemology \u2014 can alter research design and outcomes. At the core of our work is an engagement with feminist epistemology, drawing attention to how a new framework for how we know what we know enabled us to overcome intellectual tensions in our research. Specifically, we focus on the theoretical concept of entanglement, central to recent feminist scholarship, and contribute: a history of entanglement in the broader scope of feminist theory; an articulation of the main points of entanglement theory for a visualization context; and a case study of research outcomes as evidence of the potential of feminist epistemology to impact visualization research. 
This work answers a call in the community to embrace a broader set of theoretical and epistemic foundations and provides a starting point for bringing different theories into visualization research.","authors":[{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"derya.akbaba@liu.se","is_corresponding":true,"name":"Derya Akbaba"},{"affiliations":["Emory University, Atlanta, United States"],"email":"lauren.klein@emory.edu","is_corresponding":false,"name":"Lauren Klein"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"miriah.meyer@liu.se","is_corresponding":false,"name":"Miriah Meyer"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1077","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Entanglements for Visualization: Changing Research Outcomes through Feminist Theory","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1096","abstract":"Large Language Models (LLMs) have shown great potential in intelligent visualization systems, especially for domain-specific applications. Integrating LLMs into visualization systems presents challenges, and we categorize these challenges into three alignments: domain problems with LLMs, visualization with LLMs, and interaction with LLMs. To achieve these alignments, we propose a framework and outline a workflow to guide the application of fine-tuned LLMs to enhance visual interactions for domain-specific tasks. These alignment challenges are critical in education as they call for an intelligent visualization system to support beginners' self-regulated learning. Therefore, we apply the framework to education and introduce Tailor-Mind, an interactive visualization system designed to facilitate self-regulated learning for artificial intelligence beginners. Drawing on insights from a preliminary study, we identify self-regulated learning tasks and fine-tuning objectives to guide visualization design and tuning data construction. Our focus on aligning visualization with the fine-tuned LLM makes Tailor-Mind more like a personalized tutor. Tailor-Mind also supports interactive recommendations to help beginners better achieve their learning goals. 
Model performance evaluations and user studies confirm that Tailor-Mind improves the self-regulated learning experience, effectively validating the proposed framework.","authors":[{"affiliations":["Fudan University, Shanghai, China"],"email":"lgao.lynne@gmail.com","is_corresponding":true,"name":"Lin Gao"},{"affiliations":["Fudan University, Shanghai, China"],"email":"kingluther6666@gmail.com","is_corresponding":false,"name":"Jing Lu"},{"affiliations":["Fudan University, Shanghai, China"],"email":"gemini25szk@gmail.com","is_corresponding":false,"name":"Zekai Shao"},{"affiliations":["Fudan University, Shanghai, China"],"email":"ziyuelin917@gmail.com","is_corresponding":false,"name":"Ziyue Lin"},{"affiliations":["Fudan University, Shanghai, China"],"email":"sbyue23@m.fudan.edu.cn","is_corresponding":false,"name":"Shengbin Yue"},{"affiliations":["Fudan University, Shanghai, China"],"email":"chiokit0819@gmail.com","is_corresponding":false,"name":"Chiokit Ieong"},{"affiliations":["Fudan University, Shanghai, China"],"email":"21307130094@m.fudan.edu.cn","is_corresponding":false,"name":"Yi Sun"},{"affiliations":["University of Vienna, Vienna, Austria"],"email":"rory.james.zauner@univie.ac.at","is_corresponding":false,"name":"Rory Zauner"},{"affiliations":["Fudan University, Shanghai, China"],"email":"zywei@fudan.edu.cn","is_corresponding":false,"name":"Zhongyu Wei"},{"affiliations":["Fudan University, Shanghai, China"],"email":"simingchen3@gmail.com","is_corresponding":false,"name":"Siming Chen"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1096","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Fine-Tuned Large Language Model for Visualization System: A Study on Self-Regulated Learning in Education","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1099","abstract":"Tactics play an important role in team sports by guiding how players interact on the field. Both sports fans and experts have a demand for analyzing sports tactics. Existing approaches allow users to visually perceive the multivariate tactical effects. However, these approaches usually consider each tactic as a whole, making it difficult for users to connect the complex interactions inside each tactic to the final tactical effect. In this work, we collaborate with basketball experts and propose a progressive approach to help users gain a deeper understanding of how each tactic works and customize tactics on demand. Users can progressively sketch on a tactic board, and a coach agent will simulate the possible actions in each step and present the simulation to users with facet visualizations. We develop an extensible framework that integrates large language models (LLMs) and visualizations to help users communicate with the coach agent with multimodal inputs. 
Based on the framework, we design and develop Smartboard, an agent-based interactive visualization system for fine-grained tactical analysis. Smartboard provides users with a structured process of setup, simulation, and evolution, allowing for iterative exploration of tactics based on specific personalized scenarios. We conduct case studies based on real-world basketball datasets to demonstrate the usefulness of our system.","authors":[{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"ziao_liu@outlook.com","is_corresponding":true,"name":"Ziao Liu"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"xxie@zju.edu.cn","is_corresponding":false,"name":"Xiao Xie"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"3170101799@zju.edu.cn","is_corresponding":false,"name":"Moqi He"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"zhao_ws@zju.edu.cn","is_corresponding":false,"name":"Wenshuo Zhao"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"wuyihong0606@gmail.com","is_corresponding":false,"name":"Yihong Wu"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"lycheecheng@zju.edu.cn","is_corresponding":false,"name":"Liqi Cheng"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"zhang_hui@zju.edu.cn","is_corresponding":false,"name":"Hui Zhang"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"ycwu@zju.edu.cn","is_corresponding":false,"name":"Yingcai Wu"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1099","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Smartboard: Visual Exploration of Team Tactics with LLM Agent","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1100","abstract":"\u201cCorrelation does not imply causation\u201d is a famous mantra in statistical and visual analysis. However, consumers of visualizations often draw causal conclusions when only correlations between variables are shown. In this paper, we investigate factors that contribute to causal relationships users perceive in visualizations. We collected a corpus of concept pairs from variables in widely used datasets and created visualizations that depict varying correlative associations using three typical statistical chart types. We conducted two MTurk studies on (1) preconceived notions on causal relations without charts, and (2) perceived causal relations with charts, for each concept pair. Our results indicate that people make assumptions about causal relationships between pairs of concepts even without seeing any visualized data. Moreover, our results suggest that these assumptions constitute causal priors that, in combination with chart type and visualized association, impact how data visualizations are interpreted. 
The results also suggest that causal priors may lead to over- or under-estimation in perceived causal relations in different circumstances, and that those priors can also impact users\u2019 confidence in their causal assessments. Using data from the studies, we develop a model to capture the interaction between causal priors and visualized associations as they combine to impact a user\u2019s perceived causal relations. In addition to reporting the study results and analyses, we provide an open dataset of causal priors for 56 specific concept pairs that can serve as a potential benchmark for future studies. We also suggest heuristic-based guidelines to help designers improve visualization design choices to better support visual causal inference.","authors":[{"affiliations":["University of North Carolina-Chapel Hill, Chapel Hill, United States"],"email":"zeyuwang@cs.unc.edu","is_corresponding":true,"name":"Arran Zeyu Wang"},{"affiliations":["UNC-Chapel Hill, Chapel Hill, United States"],"email":"borland@renci.org","is_corresponding":false,"name":"David Borland"},{"affiliations":["Davidson College, Davidson, United States"],"email":"tapeck@davidson.edu","is_corresponding":false,"name":"Tabitha C. Peck"},{"affiliations":["University of North Carolina, Chapel Hill, United States"],"email":"vaapad@live.unc.edu","is_corresponding":false,"name":"Wenyuan Wang"},{"affiliations":["University of North Carolina, Chapel Hill, United States"],"email":"gotz@unc.edu","is_corresponding":false,"name":"David Gotz"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1100","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Causal Priors and Their Influence on Judgements of Causality in Visualized Data","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1121","abstract":"Acute stroke demands prompt diagnosis and treatment to achieve optimal patient outcomes. However, the intricate and irregular nature of clinical data associated with acute stroke, particularly blood pressure (BP) measurements, presents substantial obstacles to effective visual analytics and decision-making. Through a year-long collaboration with experienced neurologists, we developed PhenoFlow, a visual analytics system that leverages the collaboration between human and Large Language Models (LLMs) to analyze the extensive and complex data of acute ischemic stroke patients. PhenoFlow pioneers an innovative workflow, where the LLM serves as a data wrangler while neurologists explore and supervise the output using visualizations and natural language interactions. This approach enables neurologists to focus more on decision-making with reduced cognitive load. To protect sensitive patient information, PhenoFlow only utilizes metadata to make inferences and synthesize executable codes, without accessing raw patient data. 
This ensures that the results are both reproducible and interpretable while maintaining patient privacy. The system incorporates a slice-and-wrap design that employs temporal folding to create an overlaid circular visualization. Combined with a linear bar graph, this design aids in exploring meaningful patterns within irregularly measured BP data. Through case studies, PhenoFlow has demonstrated its capability to support iterative analysis of extensive clinical datasets, reducing cognitive load and enabling neurologists to make well-informed decisions. Grounded in long-term collaboration with domain experts, our research demonstrates the potential of utilizing LLMs to tackle current challenges in data-driven clinical decision-making for acute ischemic stroke patients.","authors":[{"affiliations":["Seoul National University, Seoul, Korea, Republic of"],"email":"jykim@hcil.snu.ac.kr","is_corresponding":true,"name":"Jaeyoung Kim"},{"affiliations":["Seoul National University, Seoul, Korea, Republic of"],"email":"sihyeon@hcil.snu.ac.kr","is_corresponding":false,"name":"Sihyeon Lee"},{"affiliations":["Seoul National University, Seoul, Korea, Republic of"],"email":"hj@hcil.snu.ac.kr","is_corresponding":false,"name":"Hyeon Jeon"},{"affiliations":["Korea University Guro Hospital, Seoul, Korea, Republic of"],"email":"gooday19@gmail.com","is_corresponding":false,"name":"Keon-Joo Lee"},{"affiliations":["Hankuk University of Foreign Studies, Yongin-si, Korea, Republic of"],"email":"bkim@hufs.ac.kr","is_corresponding":false,"name":"Bohyoung Kim"},{"affiliations":["Seoul National University Bundang Hospital, Seongnam, Korea, Republic of"],"email":"braindoc@snu.ac.kr","is_corresponding":false,"name":"HEE JOON"},{"affiliations":["Seoul National University, Seoul, Korea, Republic of"],"email":"jseo@snu.ac.kr","is_corresponding":false,"name":"Jinwook Seo"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1121","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"PhenoFlow: A Human-LLM Driven Visual Analytics System for Exploring Large and Complex Stroke Datasets","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1128","abstract":"Citations allow quickly identifying related research. If multiple publications are selected as seeds, specific suggestions for related literature can be made based on the number of incoming and outgoing citation links to this selection. Interactively adding recommended publications to the selection refines the next suggestion and incrementally builds a relevant collection of publications. Following this approach, the paper presents a search and foraging approach, PUREsuggest, which combines citation-based suggestions with augmented visualizations of the citation network. 
The focus and novelty of the approach are, first, the transparency of how the rankings are explained visually and, second, that the process can be steered through user-defined keywords, which reflect topics of interest. The system can be used to build new literature collections, to update and assess existing ones, as well as to use the collected literature for identifying relevant experts in the field. We evaluated the recommendation approach through simulated sessions and performed a user study investigating search strategies and usage patterns supported by the interface.","authors":[{"affiliations":["University of Bamberg, Bamberg, Germany"],"email":"fabian.beck@uni-bamberg.de","is_corresponding":true,"name":"Fabian Beck"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1128","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"PUREsuggest: Citation-based Literature Search and Visual Exploration with Keyword-controlled Rankings","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1137","abstract":"Inspired by recent advances in digital fabrication, artists and scientists have demonstrated that physical data encodings (i.e., data physicalizations) can increase engagement with data, foster collaboration, and in some cases, improve data legibility and analysis relative to digital alternatives. However, prior empirical studies have only investigated abstract data encoded in physical form (e.g., laser cut bar charts) and not continuously sampled spatial data fields relevant to climate and medical science (e.g., heights, temperatures, densities, and velocities sampled on a spatial grid). This paper presents the design and results of the first study to characterize human performance in 3D spatial data analysis tasks across analogous physical and digital visualizations. Participants analyzed continuous spatial elevation data with three visualization modalities: (1) 2D digital visualization; (2) perspective-tracked, stereoscopic \"fishtank\" virtual reality; and (3) 3D printed data physicalization. Their tasks included tracing paths downhill, looking up spatial locations and comparing their relative heights, and identifying and reporting the minimum and maximum heights within certain spatial regions. As hypothesized, in most cases, participants performed the tasks just as well or better in the physical modality (based on time and error metrics). Additional results include an analysis of open-ended feedback from participants and discussion of implications for further research on the value of data physicalization. 
All data and supplemental materials are available at https://osf.io/7xdq4/?view_only=7416f8cfca85473889456fb69527abbc","authors":[{"affiliations":["University of Minnesota, Minneapolis, United States"],"email":"bridger.g.herman@gmail.com","is_corresponding":true,"name":"Bridger Herman"},{"affiliations":["Beth Israel Deaconess Medical Center, Boston, United States"],"email":"cdjackso@bidmc.harvard.edu","is_corresponding":false,"name":"Cullen D. Jackson"},{"affiliations":["University of Minnesota, Minneapolis, United States"],"email":"dfk@umn.edu","is_corresponding":false,"name":"Daniel F. Keefe"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1137","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Touching the Ground: Evaluating the Effectiveness of Data Physicalizations for Spatial Data Analysis Tasks","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1140","abstract":"Written language is a useful mode for non-visual creative activities like writing essays and planning searches. This paper investigates the integration of written language into the visualization design process. We call this idea a `written rudder,' since it acts as a guiding force or strategy for the design. Via an interview study of 24 working visualization designers, we first established that only a minority of participants systematically use written rudders to aid in design. A second study with 15 visualization designers examined four different variants of rudders: asking questions, stating conclusions, composing a narrative, and writing titles. Overall, participants had a positive reaction; designers recognized the benefits of explicitly writing down components of the design and indicated that they would use this approach in future design work. More specifically, two approaches \u2013 writing questions and writing conclusions/takeaways \u2013 were seen as beneficial across the design process, while writing narratives showed promise mainly for the creation stage. Although concerns around potential bias during data exploration were raised, participants also discussed strategies to mitigate such concerns. 
This paper contributes to a deeper understanding of the interplay between language and visualization, and proposes a straightforward, lightweight addition to the visualization design process.","authors":[{"affiliations":["UC Berkeley, Berkeley, United States"],"email":"chase_stokes@berkeley.edu","is_corresponding":true,"name":"Chase Stokes"},{"affiliations":["Self, Berkeley, United States"],"email":"clarahu@berkeley.edu","is_corresponding":false,"name":"Clara Hu"},{"affiliations":["UC Berkeley, Berkeley, United States"],"email":"hearst@berkeley.edu","is_corresponding":false,"name":"Marti Hearst"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1140","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"It's a Good Idea to Put It Into Words: Writing 'Rudders' in the Initial Stages of Visualization Design","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1142","abstract":"To deploy machine learning (ML) models on-device, practitioners use compression algorithms to shrink and speed up models while maintaining their high-quality output. A critical aspect of compression in practice is model comparison, including tracking many compression experiments, identifying subtle changes in model behavior, and negotiating complex accuracy-efficiency trade-offs. However, existing compression tools poorly support comparison, leading to tedious and, sometimes, incomplete analyses spread across disjoint tools. To support real-world comparative workflows, we develop an interactive visual system called Compress & Compare. Within a single interface, Compress & Compare surfaces promising compression strategies by visualizing provenance relationships between compressed models and reveals compression-induced behavior changes by comparing models' predictions, weights, and activations. We demonstrate how Compress & Compare supports common compression analysis tasks through two case studies\u2014debugging failed compression on generative language models and identifying compression-induced biases in image classification. We further evaluate Compress & Compare in a user study with eight compression experts, illustrating its potential to provide structure to compression workflows, help practitioners build intuition about compression, and encourage thorough analysis of compression\u2019s effect on model behavior. 
Through these evaluations, we identify compression-specific challenges that future visual analytics tools should consider and Compress & Compare visualizations that may generalize to broader model comparison tasks.","authors":[{"affiliations":["Massachusetts Institute of Technology, Cambridge, United States"],"email":"aboggust@mit.edu","is_corresponding":true,"name":"Angie Boggust"},{"affiliations":["Carnegie Mellon University, Pittsburgh, United States"],"email":"vsivaram@andrew.cmu.edu","is_corresponding":false,"name":"Venkatesh Sivaraman"},{"affiliations":["Apple, Cambridge, United States"],"email":"yassogba@gmail.com","is_corresponding":false,"name":"Yannick Assogba"},{"affiliations":["Apple, Seattle, United States"],"email":"donghao@apple.com","is_corresponding":false,"name":"Donghao Ren"},{"affiliations":["Apple, Pittsburgh, United States"],"email":"domoritz@cmu.edu","is_corresponding":false,"name":"Dominik Moritz"},{"affiliations":["Apple, Seattle, United States"],"email":"fred.hohman@gmail.com","is_corresponding":false,"name":"Fred Hohman"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1142","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Compress and Compare: Interactively Evaluating Efficiency and Behavior Across ML Model Compression Experiments","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1147","abstract":"Large Language Models (LLMs) like GPT-4 which support multimodal input (i.e., prompts containing images in addition to text) have immense potential to advance visualization research. However, many questions exist about the visual capabilities of such models, including how well they can read and interpret visually represented data. In our work, we address this question by evaluating the GPT-4 multimodal LLM using a suite of task sets meant to assess the model\u2019s visualization literacy. The task sets are based on existing work in the visualization community addressing both automated chart question answering and human visualization literacy across multiple settings. Our assessment finds that GPT-4 can perform tasks such as recognizing trends and extreme values, and also demonstrates some understanding of visualization design best-practices. By contrast, GPT-4 struggles with simple value retrieval when not provided with the original dataset, lacks the ability to reliably distinguish between colors in charts, and occasionally suffers from hallucination and inconsistency. We conclude by reflecting on the model\u2019s strengths and weaknesses as well as the potential utility of models like GPT-4 for future visualization research. 
We also release all code, stimuli, and results for the task sets at the following link: (REDACTED FOR REVIEW)","authors":[{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"abendeck3@gatech.edu","is_corresponding":true,"name":"Alexander Bendeck"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"john.stasko@cc.gatech.edu","is_corresponding":false,"name":"John Stasko"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1147","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"An Empirical Evaluation of the GPT-4 Multimodal Language Model on Visualization Literacy Tasks","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1150","abstract":"Composite visualization represents a widely embraced design that combines multiple visual representations to create an integrated view. However, the traditional approach of creating composite visualizations in immersive environments typically occurs asynchronously outside of the immersive space and is carried out by experienced experts. In this work, we take the first step to empower users to participate in the creation of composite visualization within immersive environments through embodied interactions. This could provide a flexible and fluid experience for data exploration and facilitate a deep understanding of the relationship between data visualizations. We begin with forming a design space of embodied interactions to create various types of composite visualizations with the consideration of data relationships. Drawing inspiration from people's natural experience of manipulating physical objects, we design interactions to directly assemble composite visualizations in immersive environments. Building upon the design space, we present a series of case studies showcasing the interactive method to create different kinds of composite visualizations in Virtual Reality (VR). Subsequently, we conduct a user study to evaluate the usability of the derived interaction techniques and user experience of embodiedly creating composite visualizations. 
We find that empowering users to participate in composite visualizations through embodied interactions enables them to flexibly leverage different visualization representations for understanding and communicating the relationships between different views, which underscores the potential for a set of application scenarios in the future.","authors":[{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"qzhual@connect.ust.hk","is_corresponding":true,"name":"Qian Zhu"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"luttul@umich.edu","is_corresponding":false,"name":"Tao Lu"},{"affiliations":["Adobe Research, San Jose, United States"],"email":"sguo@adobe.com","is_corresponding":false,"name":"Shunan Guo"},{"affiliations":["Hong Kong University of Science and Technology, Hong Kong, Hong Kong"],"email":"mxj@cse.ust.hk","is_corresponding":false,"name":"Xiaojuan Ma"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"yalongyang@hotmail.com","is_corresponding":false,"name":"Yalong Yang"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1150","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"CompositingVis: Exploring Interaction for Creating Composite Visualizations in Immersive Environments","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1153","abstract":"Points of interest on a map such as restaurants, hotels, or subway stations, give rise to categorical point data: data that have a fixed location and one or more categorical attributes. Consequently, recent years have seen various set visualization approaches that visually connect points of the same category to support users in understanding the spatial distribution of categories. Existing methods use complex and often highly irregular shapes to connect points of the same category, leading to high cognitive load for the user. In this paper we introduce SimpleSets that use simple shapes to enclose categorical point patterns and provide a low-complexity overview of the data distribution. We give formal definitions of point patterns that correspond to simple shapes and describe an algorithm that partitions categorical points into few such patterns. Our second contribution is a rendering algorithm that transforms a given partition into a clean set of shapes resulting in an aesthetically pleasing set visualization. Our algorithm pays particular attention to resolving intersections between nearby shapes in a consistent manner. 
We compare SimpleSets to state-of-the-art set visualizations using standard datasets from the literature. SimpleSets are designed to visualize disjoint categories; however, we discuss avenues to extend our technique to overlapping set systems.","authors":[{"affiliations":["TU Eindhoven, Eindhoven, Netherlands"],"email":"s.w.v.d.broek@tue.nl","is_corresponding":true,"name":"Steven van den Broek"},{"affiliations":["TU Eindhoven, Eindhoven, Netherlands"],"email":"w.meulemans@tue.nl","is_corresponding":false,"name":"Wouter Meulemans"},{"affiliations":["TU Eindhoven, Eindhoven, Netherlands"],"email":"b.speckmann@tue.nl","is_corresponding":false,"name":"Bettina Speckmann"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1153","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"SimpleSets: Capturing Categorical Point Patterns with Simple Shapes","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1155","abstract":"Interactive visualizations are powerful tools for Exploratory Data Analysis (EDA), but how do they affect the observations analysts make about their data? We conducted a qualitative experiment with 13 professional data scientists analyzing two datasets within Jupyter notebooks, collecting a rich dataset of interaction traces and think-aloud utterances. By qualitatively analyzing participant verbalizations, we introduce the concept of \"observation-analysis states.\" These states capture both the dataset characteristics a participant focuses on and the insights they express. Our definition reveals that interactive visualizations on average lead to earlier and more complex insights about relationships between dataset attributes compared to static visualizations. Moreover, this process identified new measures for studying representation use in notebooks such as hover time, revisiting rate and representational diversity. In particular, revisiting rates revealed behavior where analysts revisit particular representations throughout the time course of an analysis, serving more as navigational aids through an EDA than as strict hypothesis answering tools. We show how these measures helped identify other patterns of analysis behavior, such as the \"80-20 rule\", where a small subset of representations drove the majority of observations. 
Based on these findings, we offer design guidelines for interactive exploratory analysis tooling and reflect on future directions for studying the role that visualizations play in EDA.","authors":[{"affiliations":["MIT, Cambridge, United States"],"email":"dwootton@mit.edu","is_corresponding":true,"name":"Dylan Wootton"},{"affiliations":["MIT, Cambridge, United States"],"email":"amyraefoxphd@gmail.com","is_corresponding":false,"name":"Amy Rae Fox"},{"affiliations":["University of Colorado Boulder, Boulder, United States"],"email":"evan.peck@colorado.edu","is_corresponding":false,"name":"Evan Peck"},{"affiliations":["MIT, Cambridge, United States"],"email":"arvindsatya@mit.edu","is_corresponding":false,"name":"Arvind Satyanarayan"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1155","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Charting EDA: How Visualizations and Interactions Shape Analysis in Computational Notebooks.","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1179","abstract":"Multi-objective evolutionary algorithms (MOEAs) have emerged as powerful tools for solving complex optimization problems characterized by multiple, often conflicting, objectives. While advancements have been made in computational efficiency as well as diversity and convergence of solutions, a critical challenge persists: the internal evolutionary mechanisms are opaque to human users. Drawing upon the successes of explainable AI in explaining complex algorithms and models, we argue that the need to understand the underlying evolutionary operators and population dynamics in MOEAs aligns well with a visual analytics paradigm. This paper introduces ParetoTracker, a visual analytics framework designed to support the comprehension and inspection of population dynamics in the evolutionary processes of MOEAs. Informed by preliminary literature review and expert interviews, the framework establishes a multi-level analysis scheme, which caters to user engagement and exploration ranging from examining overall trends in performance metrics to conducting fine-grained inspections of evolutionary operations. In contrast to conventional practices that require manual plotting of solutions for each generation, ParetoTracker facilitates the examination of temporal trends and dynamics across consecutive generations in an integrated visual interface. 
The effectiveness of the framework is demonstrated through case studies and expert interviews focused on widely adopted benchmark optimization problems.","authors":[{"affiliations":["Southern University of Science and Technology, Shenzhen, China"],"email":"zhangzr32021@mail.sustech.edu.cn","is_corresponding":false,"name":"Zherui Zhang"},{"affiliations":["Southern University of Science and Technology, Shenzhen, China"],"email":"yangf2020@mail.sustech.edu.cn","is_corresponding":false,"name":"Fan Yang"},{"affiliations":["Southern University of Science and Technology, Shenzhen, China"],"email":"ranchengcn@gmail.com","is_corresponding":false,"name":"Ran Cheng"},{"affiliations":["Southern University of Science and Technology, Shenzhen, China"],"email":"mayx@sustech.edu.cn","is_corresponding":true,"name":"Yuxin Ma"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1179","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"ParetoTracker: Understanding Population Dynamics in Multi-objective Evolutionary Algorithms through Visual Analytics","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1185","abstract":"This paper presents an interactive technique to explain visual patterns in network visualizations to analysts who are unfamiliar with these visualizations and who are learning to read them. Learning a visualization requires mastering its visual grammar and decoding information presented through visual marks, graphical encodings, and spatial configurations. To help people learn unfamiliar network visualization designs and extract meaningful information, we introduce the concept of interactive pattern explanation that allows viewers to select an arbitrary area in a visualization, then mines the underlying data patterns, and eventually explains both visual and data patterns present in the viewer\u2019s selection. In a qualitative and a quantitative user study with a total of 32 participants, we compare interactive pattern explanations to only textual and only visual (cheatsheets) explanations. 
Our results show that interactive explanations increase learning of i) unfamiliar visualizations, ii) patterns in network science, and iii) the respective network terminology.","authors":[{"affiliations":["Newcastle University, Newcastle Upon Tyne, United Kingdom"],"email":"xinhuan.shu@gmail.com","is_corresponding":true,"name":"Xinhuan Shu"},{"affiliations":["University of Edinburgh, Edinburgh, United Kingdom"],"email":"alexis.pister@hotmail.com","is_corresponding":false,"name":"Alexis Pister"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"tangjunxiu@zju.edu.cn","is_corresponding":false,"name":"Junxiu Tang"},{"affiliations":["University of Toronto, Toronto, Canada"],"email":"fanny@dgp.toronto.edu","is_corresponding":false,"name":"Fanny Chevalier"},{"affiliations":["Inria, Bordeaux, France","University of Edinburgh, Edinburgh, United Kingdom"],"email":"bbach@inf.ed.ac.uk","is_corresponding":false,"name":"Benjamin Bach"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1185","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Does This Have a Particular Meaning?: Interactive Pattern Explanation for Network Visualizations","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1193","abstract":"Emerging multimodal large language models (MLLMs) exhibit great potential for chart question answering (CQA). Recent efforts primarily focus on scaling up training datasets (i.e., charts, data tables, and question-answer (QA) pairs) through data collection and synthesis. However, our empirical study on existing MLLMs and CQA datasets reveals notable gaps. First, current data collection and synthesis focus on data volume and lack consideration of fine-grained visual encodings and QA tasks, resulting in unbalanced data distribution divergent from practical CQA scenarios. Second, existing work follows the training recipe of the base MLLMs initially designed for natural images, under-exploring the adaptation to unique chart characteristics, such as rich text elements. To fill the gap, we propose a visualization-referenced instruction tuning approach to guide the training dataset enhancement and model development. Specifically, we propose a novel data engine to effectively filter diverse and high-quality data from existing datasets and subsequently refine and augment the data using LLM-based generation techniques to better align with practical QA tasks and visual encodings. Then, to facilitate the adaptation to chart characteristics, we utilize the enriched data to train an MLLM by unfreezing the vision encoder and incorporating a mixture-of-resolution adaptation strategy for enhanced fine-grained recognition. Experimental results validate the effectiveness of our approach. Even with fewer training examples, our model consistently outperforms state-of-the-art CQA models on established benchmarks. 
We also contribute a dataset split as a benchmark for future research.","authors":[{"affiliations":["The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China"],"email":"xingchen.zeng@outlook.com","is_corresponding":true,"name":"Xingchen Zeng"},{"affiliations":["The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China"],"email":"hlin386@connect.hkust-gz.edu.cn","is_corresponding":false,"name":"Haichuan Lin"},{"affiliations":["The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China"],"email":"yyebd@connect.ust.hk","is_corresponding":false,"name":"Yilin Ye"},{"affiliations":["The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China","The Hong Kong University of Science and Technology, Hong Kong SAR, China"],"email":"weizeng@hkust-gz.edu.cn","is_corresponding":false,"name":"Wei Zeng"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1193","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Advancing Multimodal Large Language Models in Chart Question Answering with Visualization-Referenced Instruction Tuning","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1202","abstract":"The Dunning-Kruger Effect (DKE) is a metacognitive phenomenon where low-skilled individuals tend to overestimate their competence while high-skilled individuals tend to underestimate their competence. This effect has been observed in a number of domains including humor, grammar, and logic. In this paper, we explore if and how DKE manifests in visual reasoning and visual data analysis tasks. Across two online user studies involving (1) a sliding puzzle game and (2) a scatterplot-based categorization task, we demonstrate that individuals are susceptible to DKE in visual tasks: those who performed best underestimated their performance, while bottom performers overestimated their performance. In addition, we contribute novel analyses that correlate susceptibility of DKE with several variables including personality traits and user interactions. 
Our findings pave the way for novel modes of bias detection via interaction patterns and establish promising directions towards interventions tailored to an individual's personality traits.","authors":[{"affiliations":["Emory University, Atlanta, United States"],"email":"mengyu.chen@emory.edu","is_corresponding":true,"name":"Mengyu Chen"},{"affiliations":["Emory University, Atlanta, United States"],"email":"yijun.liu2@emory.edu","is_corresponding":false,"name":"Yijun Liu"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1202","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Unmasking Dunning-Kruger Effect in Visual Reasoning and Visual Data Analysis","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1204","abstract":"We present ProvenanceWidgets, a Javascript library of UI control elements such as radio buttons, checkboxes, and dropdowns to track and dynamically overlay a user's analytic provenance. These in situ overlays not only save screen space but also minimize the amount of time and effort needed to access the same information from elsewhere in the UI. In this paper, we discuss how we design modular UI control elements to track how often and how recently a user interacts with them and design visual overlays showing an aggregated summary as well as a detailed temporal history. We demonstrate the capability of ProvenanceWidgets by recreating three prior widget libraries: (1) Scented Widgets, (2) Phosphor objects, and (3) Dynamic Query Widgets. We also evaluated its expressiveness and conducted case studies with visualization developers to evaluate its effectiveness. We find that ProvenanceWidgets enables developers to implement custom provenance-tracking applications effectively. 
ProvenanceWidgets is available as open-source software at https://github.com/ProvenanceWidgets to help application developers build custom provenance-based systems.","authors":[{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"arpitnarechania@gatech.edu","is_corresponding":true,"name":"Arpit Narechania"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"kaustubhodak1@gmail.com","is_corresponding":false,"name":"Kaustubh Odak"},{"affiliations":["ETH Z\u00fcrich, Z\u00fcrich, Switzerland"],"email":"melassady@ai.ethz.ch","is_corresponding":false,"name":"Mennatallah El-Assady"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"endert@gatech.edu","is_corresponding":false,"name":"Alex Endert"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1204","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"ProvenanceWidgets: A Library of UI Control Elements to Track and Dynamically Overlay Analytic Provenance","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1214","abstract":"Graphs are often used to model relationships between entities. The identification and visualization of clusters in graphs enable insight discovery in many application areas, such as life sciences and social sciences. Force-directed graph layout algorithms promote the visual saliency of clusters, as they generally bring adjacent nodes closer together, and push non-adjacent nodes apart. In this work, we study the impact of node ordering on the visual saliency of clusters in orderable node-link diagrams, namely radial diagrams, arc diagrams and symmetric arc diagrams. Through a crowdsourced controlled experiment, we show that users can count clusters consistently more accurately, and to a large extent faster, with orderable node-link diagrams than with three state-of-the-art force-directed layout algorithms, i.e., `Linlog', `Backbone', and `sfdp'. The measured advantage is greater in cases of low cluster separability and/or low compactness. 
A free copy of this paper and all supplemental materials are available at https://osf.io/kc3dg/?view_only=892f7b96752e40a6baefb2e50e866f9d","authors":[{"affiliations":["Luxembourg Institute of Science and Technology, Esch-sur-Alzette, Luxembourg"],"email":"nora.alnaami@list.lu","is_corresponding":false,"name":"Nora Al-Naami"},{"affiliations":["Luxembourg Institute of Science and Technology, Belvaux, Luxembourg"],"email":"nicolas.medoc@list.lu","is_corresponding":false,"name":"Nicolas Medoc"},{"affiliations":["Uppsala University, Uppsala, Sweden"],"email":"matteo.magnani@it.uu.se","is_corresponding":false,"name":"Matteo Magnani"},{"affiliations":["Luxembourg Institute of Science and Technology, Belvaux, Luxembourg"],"email":"mohammad.ghoniem@list.lu","is_corresponding":true,"name":"Mohammad Ghoniem"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1214","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Improved Visual Saliency of Graph Clusters with Orderable Node-Link Layouts","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1218","abstract":"Placing text labels is a common way to explain key elements in a given scene. Given a graphic input and original label information, how to place labels to meet both geometric and aesthetic requirements is an open challenging problem. Geometry-wise, traditional rule-driven solutions struggle to capture the complex interactions between labels, let alone consider graphical/appearance content. In terms of aesthetics, training/evaluation data ideally require nontrivial effort and expertise in design, thus resulting in a lack of decent datasets for learning-based methods. To address the above challenges, we formulate the task with a graph representation, where nodes correspond to labels and edges to the between-label interactions, and treat label placement as a node position prediction problem. With this novel representation, we design a Label Placement Graph Transformer (LPGT) to predict label positions. Specifically, edge-level attention, conditioned on node representations, is introduced to reveal potential relationships between labels. To integrate graphic/image information, we design a feature aligning strategy that extracts deep features for nodes and edges efficiently. Next, to address the dataset issue, we collect commercial illustrations with professionally designed label layouts from household appliance manuals, and annotate them with useful information to create a novel dataset named the Appliance Manual Illustration Labels (AMIL) dataset. 
In a thorough evaluation on AMIL, our LPGT solution achieves promising label placement performance compared with popular baselines.","authors":[{"affiliations":["Southwest University, Beibei, China"],"email":"qujingwei@swu.edu.cn","is_corresponding":true,"name":"Jingwei Qu"},{"affiliations":["Southwest University, Chongqing, China"],"email":"z2211973606@email.swu.edu.cn","is_corresponding":false,"name":"Pingshun Zhang"},{"affiliations":["Southwest University, Beibei, China"],"email":"enyuche@gmail.com","is_corresponding":false,"name":"Enyu Che"},{"affiliations":["College of Computer and Information Science, School of Software, Southwest University, Chongqing, China"],"email":"out1147205215@outlook.com","is_corresponding":false,"name":"Yinan Chen"},{"affiliations":["Stony Brook University, New York, United States"],"email":"hling@cs.stonybrook.edu","is_corresponding":false,"name":"Haibin Ling"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1218","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Graph Transformer for Label Placement","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1232","abstract":"How do cancer cells grow, divide, proliferate and die? How do drugs influence these processes? These are difficult questions that we can attempt to answer with a combination of time-series microscopy experiments, classification algorithms, and data visualization. However, collecting this type of data and applying algorithms to segment and track cells and construct lineages of proliferation is error-prone; and identifying the errors can be challenging since it often requires cross-checking multiple data types. Similarly, analyzing and communicating the results necessitates synthesizing different data types into a single narrative. State-of-the-art visualization methods for such data use independent line charts, tree diagrams, and images in separate views. However, this spatial separation requires the viewer of these charts to combine the relevant pieces of data in memory. To simplify this challenging task, we describe design principles for weaving cell images, time-series data, and tree data into a cohesive visualization. Our design principles are based on choosing a primary data type that drives the layout and integrates the other data types into that layout. We then introduce Aardvark, a system that uses these principles to implement novel visualization techniques. 
Based on Aardvark, we demonstrate the utility of each of these approaches for discovery, communication, and data debugging in a series of case studies.","authors":[{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"devin@sci.utah.edu","is_corresponding":true,"name":"Devin Lange"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"robert.judson-torres@hci.utah.edu","is_corresponding":false,"name":"Robert L Judson-Torres"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"tzangle@chemeng.utah.edu","is_corresponding":false,"name":"Thomas A Zangle"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"alex@sci.utah.edu","is_corresponding":false,"name":"Alexander Lex"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1232","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Aardvark: Composite Visualizations of Trees, Time-Series, and Images","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1251","abstract":"Exploratory data science is an iterative process of obtaining, cleaning, profiling, analyzing, and interpreting data. This cyclical way of working creates challenges within the linear structure of computational notebooks that lead to issues with code quality, recall, and reproducibility. To remedy this, we present Loops, a set of visual support techniques for iterative and exploratory data analysis in computational notebooks. Loops leverages provenance information to visualize the impact of changes made within a notebook. In visualizations of the notebook history, we trace the evolution of the notebook over time and highlight differences between versions. Loops visualizes the provenance of code, markdown, tables, visualizations, and images and their respective differences. Analysts can explore these differences in detail in a separate view. Loops not only improves the reproducibility of notebooks, but also supports analysts in their data science work by showing the effects of changes and facilitating comparison of multiple versions. 
We demonstrate the utility and potential impact of our approach in two use cases and through feedback from notebook users from a range of backgrounds.","authors":[{"affiliations":["Johannes Kepler University Linz, Linz, Austria"],"email":"klaus@eckelt.info","is_corresponding":true,"name":"Klaus Eckelt"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"kirangadhave2@gmail.com","is_corresponding":false,"name":"Kiran Gadhave"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"alex@sci.utah.edu","is_corresponding":false,"name":"Alexander Lex"},{"affiliations":["Johannes Kepler University Linz, Linz, Austria"],"email":"marc.streit@jku.at","is_corresponding":false,"name":"Marc Streit"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1251","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Loops: Leveraging Provenance and Visualization to Support Exploratory Data Analysis in Notebooks","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1256","abstract":"People commonly utilize visualizations not only to examine a given dataset, but also to draw generalizable conclusions about the underlying models or phenomena. Previous research has compared human visual inference to that of an optimal Bayesian agent, with deviations from rational analysis viewed as problematic. However, human reliance on non-normative heuristics may prove advantageous in certain circumstances. We investigate scenarios where human intuition might surpass idealized statistical rationality. In two experiments, we examine individuals' accuracy in characterizing the parameters of known data-generating models from bivariate visualizations. Our findings indicate that, although participants generally exhibited lower accuracy compared to statistical models, they frequently outperformed Bayesian agents, particularly when faced with extreme samples. Participants appeared to rely on their internal models to filter out noisy visualizations, thus improving their resilience against spurious data. However, participants displayed overconfidence and struggled with uncertainty estimation. They also exhibited higher variance than statistical machines. Our findings suggest that analyst gut reactions to visualizations may provide an advantage, even when departing from rationality. These results carry implications for designing visual analytics tools, offering new perspectives on how to integrate statistical models and analyst intuition for improved inference and decision-making.","authors":[{"affiliations":["Indiana University, Indianapolis, United States"],"email":"rkoonch@iu.edu","is_corresponding":true,"name":"Ratanond Koonchanok"},{"affiliations":["Argonne National Laboratory, Lemont, United States","University of Illinois Chicago, Chicago, United States"],"email":"papka@anl.gov","is_corresponding":false,"name":"Michael E. 
Papka"},{"affiliations":["Indiana University, Indianapolis, United States"],"email":"redak@iu.edu","is_corresponding":false,"name":"Khairi Reda"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1256","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Trust Your Gut: Comparing Human and Machine Inference from Noisy Visualizations","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1258","abstract":"Providing effective guidance for users has long been an important and challenging task for efficient exploratory visual analytics, especially when selecting variables for visualization in high-dimensional datasets. Correlation is the most widely applied metric for guidance in statistical and analytical tools; however, a reliance on correlation may lead users towards false positives when interpreting causal relations in the data. In this work, inspired by prior insights on the benefits of counterfactual visualization in supporting visual causal inference, we propose a novel, simple, and efficient counterfactual guidance method to enhance causal inference performance in guided exploratory analytics based on insights and concerns gathered from expert interviews. Our technique aims to capitalize on the benefits of counterfactual approaches while reducing their complexity for users. We integrated counterfactual guidance into an exploratory visual analytics system, and using a synthetically generated ground-truth causal dataset, conducted a comparative user study and evaluated to what extent counterfactual guidance can help lead users to more precise visual causal inferences. The results suggest that counterfactual guidance improved visual causal inference performance, and also led to different exploratory behaviors compared to correlation-based guidance. 
Based on these findings, we offer future directions to incorporate and examine counterfactual guidance to better support exploratory visual analytics.","authors":[{"affiliations":["University of North Carolina-Chapel Hill, Chapel Hill, United States"],"email":"zeyuwang@cs.unc.edu","is_corresponding":true,"name":"Arran Zeyu Wang"},{"affiliations":["UNC-Chapel Hill, Chapel Hill, United States"],"email":"borland@renci.org","is_corresponding":false,"name":"David Borland"},{"affiliations":["University of North Carolina, Chapel Hill, United States"],"email":"gotz@unc.edu","is_corresponding":false,"name":"David Gotz"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1258","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Beyond Correlation: Incorporating Counterfactual Guidance to Better Support Exploratory Visual Analysis","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1272","abstract":"In various scientific and industrial domains, analyzing multivariate spatial data, i.e., vectors associated with spatial locations, is common practice. To analyze those datasets, analysts may turn to models such as Spatial Blind Source Separation (SBSS). Designed explicitly for spatial data analysis, SBSS finds latent components in the dataset and is superior to popular non-spatial models, like PCA. However, when analysts try different tuning parameter settings, the number of latent components complicates analytical tasks. Based on our years-long collaboration with SBSS researchers, we propose a visualization approach to tackle this challenge. The main component is UnDRground Tubes (UT), a general-purpose idiom combining ideas from set visualization and multidimensional projections. We describe the UT visualization pipeline and integrate UT into an interactive multiple-view system. We demonstrate its effectiveness through interviews with SBSS experts, a qualitative evaluation with visualization experts, and computational experiments. SBSS experts were excited about our approach. They saw many benefits for their work and potential applications for geostatistical data analysis more generally. UT was also very well received by visualization experts. 
Our benchmarks show that UT's projections and heuristics are appropriate.","authors":[{"affiliations":["TU Wien, Vienna, Austria"],"email":"nikolaus.piccolotto@tuwien.ac.at","is_corresponding":true,"name":"Nikolaus Piccolotto"},{"affiliations":["TU Wien, Vienna, Austria"],"email":"mwallinger@ac.tuwien.ac.at","is_corresponding":false,"name":"Markus Wallinger"},{"affiliations":["Institute of Visual Computing and Human-Centered Technology, Vienna, Austria"],"email":"miksch@ifs.tuwien.ac.at","is_corresponding":false,"name":"Silvia Miksch"},{"affiliations":["TU Wien, Vienna, Austria"],"email":"markus.boegl@tuwien.ac.at","is_corresponding":false,"name":"Markus B\u00f6gl"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1272","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"UnDRground Tubes: Exploring Spatial Data With Multidimensional Projections and Set Visualization","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1275","abstract":"We developed and validated an instrument to measure the perceived readability in data visualization: PREVis. Researchers and practitioners can easily use this instrument as part of their evaluations to compare the perceived readability of different visual data representations. Our instrument can complement results from controlled experiments on user task performance or provide additional data during in-depth qualitative work such as design iterations when developing a new technique. Although readability is recognized as an essential quality of data visualizations, so far there has not been a unified definition of the construct in the context of visual representations. As a result, researchers often lack guidance for determining how to ask people to rate their perceived readability of a visualization. To address this issue, we engaged in a rigorous process to develop the first validated instrument targeted at the subjective readability of visual data representations. Our final instrument consists of 11 items across 4 dimensions: understandability, layout clarity, readability of data values, and readability of data patterns. We provide the questionnaire as a document with implementation guidelines on osf.io/9cg8j. 
Beyond this instrument, we contribute a discussion of how researchers have previously assessed visualization readability, and an analysis of the factors underlying perceived readability in visual data representations.","authors":[{"affiliations":["LISN, Universit\u00e9 Paris Saclay, CNRS, Orsay, France","Aviz, Inria, Saclay, France"],"email":"acabouat@gmail.com","is_corresponding":true,"name":"Anne-Flore Cabouat"},{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Orsay, France","Inria, Saclay, France"],"email":"tingying.he@inria.fr","is_corresponding":false,"name":"Tingying He"},{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Orsay, France","Inria, Saclay, France"],"email":"petra.isenberg@inria.fr","is_corresponding":false,"name":"Petra Isenberg"},{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Orsay, France","Inria, Saclay, France"],"email":"tobias.isenberg@gmail.com","is_corresponding":false,"name":"Tobias Isenberg"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1275","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"PREVis: Perceived Readability Evaluation for Visualizations","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1277","abstract":"This paper presents a novel end-to-end framework for closed-form computation and visualization of critical point uncertainty in 2D uncertain scalar fields. Critical points are fundamental topological descriptors used in the visualization and analysis of scalar fields. The uncertainty inherent in data (e.g., observational and experimental data, approximations in simulations, and compression), however, creates uncertainty regarding critical point positions. Uncertainty in critical point positions, therefore, cannot be ignored, given their impact on downstream data analysis tasks. In this work, we study uncertainty in critical points as a function of uncertainty in data modeled with probability distributions. Although Monte Carlo (MC) sampling techniques have been used in prior studies to quantify critical point uncertainty, they are often expensive and are infrequently used in production-quality visualization software. We, therefore, propose a new end-to-end framework to address these challenges that comprises a threefold contribution. First, we derive the critical point uncertainty in closed form, which is more accurate and efficient than the conventional MC sampling methods. Specifically, we provide the closed-form and semianalytical (a mix of closed-form and MC methods) solutions for parametric (e.g., uniform, Epanechnikov) and nonparametric models (e.g., histograms) with finite support. Second, we accelerate critical point probability computations using a parallel implementation with the VTK-m library, which is platform portable. 
Finally, we integrate our implementation with the ParaView software system to demonstrate near-real-time results for real datasets.","authors":[{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"tushar.athawale@gmail.com","is_corresponding":true,"name":"Tushar M. Athawale"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"wangz@ornl.gov","is_corresponding":false,"name":"Zhe Wang"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"pugmire@ornl.gov","is_corresponding":false,"name":"David Pugmire"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"kmorel@acm.org","is_corresponding":false,"name":"Kenneth Moreland"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"gongq@ornl.gov","is_corresponding":false,"name":"Qian Gong"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"klasky@ornl.gov","is_corresponding":false,"name":"Scott Klasky"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"crj@sci.utah.edu","is_corresponding":false,"name":"Chris R. Johnson"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"paul.rosen@utah.edu","is_corresponding":false,"name":"Paul Rosen"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1277","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Uncertainty Visualization of Critical Points of 2D Scalar Fields for Parametric and Nonparametric Probabilistic Models","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1281","abstract":"Participatory budgeting (PB) is a democratic approach to allocating municipal spending that has been adopted in many places in recent years, including in Chicago. Current PB voting resembles a ballot where residents are asked which municipal projects, such as school improvements and road repairs, to fund with a limited budget. In this work, we ask how interactive visualization can benefit PB by conducting a design probe-based interview study (N=13) with policy workers and academics with expertise in PB, urban planning, and civic HCI. Our probe explores how graphical elicitation of voter preferences and a dashboard of voting statistics can be incorporated into a realistic PB tool. Through qualitative analysis, we find that visualization creates opportunities for city government to set expectations about budget constraints while also granting their constituents greater freedom to articulate a wider range of preferences. However, using visualization to provide transparency about PB requires efforts to mitigate potential access barriers and mistrust. 
We call for more visualization professionals to help build civic capacity by working in and studying political systems.","authors":[{"affiliations":["University of Chicago, Chicago, United States"],"email":"kalea@uchicago.edu","is_corresponding":true,"name":"Alex Kale"},{"affiliations":["University of Chicago, Chicago, United States"],"email":"danni6@uchicago.edu","is_corresponding":false,"name":"Danni Liu"},{"affiliations":["University of Chicago, Chicago, United States"],"email":"mariagabrielaa@uchicago.edu","is_corresponding":false,"name":"Maria Gabriela Ayala"},{"affiliations":["University of Chicago, Chicago, United States"],"email":"hwschwab@uchicago.edu","is_corresponding":false,"name":"Harper Schwab"},{"affiliations":["University of Washington, Seattle, United States","University of Utah, Salt Lake City, United States"],"email":"mcnutt.andrew@gmail.com","is_corresponding":false,"name":"Andrew M McNutt"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1281","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"What Can Interactive Visualization do for Participatory Budgeting in Chicago?","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1288","abstract":"Data tables are one of the most common ways in which people encounter data. Although mostly built with text and numbers, data tables have a spatial layout and often exhibit visual elements meant to facilitate their reading. Surprisingly, there is an empirical knowledge gap on how people read and use tables and how different visual aids affect people's ability to use them. In this work, we seek to address this vacuum through a controlled study. We asked participants to repeatedly perform four different tasks with tables in four table representation conditions (plain tables, tables with zebra striping, tables with cell background color encoding cell value, and tables with background bar length in a cell encoding cell value). We analyzed completion time, error rate, gaze-tracking data, mouse movement and participant preferences. We found that visual encodings (especially color) help with finding maximum values, but not as much as zebra striping helps in a complex task (comparison of proportional differences). We also characterize typical human behavior for the different tasks. 
These findings can inform the design of tables and research directions for improving presentation of data in tabular form.","authors":[{"affiliations":["University of Victoria, Victoria, Canada"],"email":"yongfengji@uvic.ca","is_corresponding":false,"name":"YongFeng Ji"},{"affiliations":["University of Victoria, Victoria, Canada"],"email":"cperin@uvic.ca","is_corresponding":true,"name":"Charles Perin"},{"affiliations":["University of Victoria, Victoria, Canada"],"email":"nacenta@gmail.com","is_corresponding":false,"name":"Miguel A Nacenta"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1288","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"The Effect of Visual Aids on Reading Numeric Data Tables","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1290","abstract":"Visualization linters are end-user-facing evaluators that automatically identify potential chart issues. These spell-checker-like systems offer a blend of interpretability and customization that is not found in other forms of automated assistance. However, existing linters do not model context and have primarily targeted users who do not need assistance, resulting in obvious---even annoying---advice. We investigate these issues within the domain of color palette design, which serves as a microcosm of visualization design concerns. We contribute a GUI-based color palette linter as a design probe that covers perception, accessibility, context, and other design criteria, and use it to explore visual explanations, integrated fixes, and user-defined linting rules. Through a formative interview study and theory-driven analysis, we find that linters can be meaningfully integrated into graphical contexts thereby addressing many of their core issues. 
We discuss implications for integrating linters into visualization tools, developing improved assertion languages, and supporting end-user tunable advice---all laying the groundwork for more effective visualization linters in any context.","authors":[{"affiliations":["University of Washington, Seattle, United States","University of Utah, Salt Lake City, United States"],"email":"mcnutt.andrew@gmail.com","is_corresponding":true,"name":"Andrew M McNutt"},{"affiliations":["University of Washington, Seattle, United States"],"email":"maureen.stone@gmail.com","is_corresponding":false,"name":"Maureen Stone"},{"affiliations":["University of Washington, Seattle, United States"],"email":"jheer@uw.edu","is_corresponding":false,"name":"Jeffrey Heer"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1290","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Mixing Linters with GUIs: A Color Palette Design Probe","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1291","abstract":"Emotion is an important factor to consider when designing visualizations as it can impact the amount of trust viewers place in a visualization, how well they can retrieve information and understand the underlying data, and how much they engage with or connect to a visualization. We conducted five crowdsourced experiments to quantify the effects of color, chart type, data trend, data variability and data density on emotion (measured through self-reported arousal and valence). Results from our experiments show that there are multiple design elements which influence the emotion induced by a visualization and, more surprisingly, that certain data characteristics influence the emotion of viewers even when the data has no meaning. 
In light of these findings, we offer guidelines on how to use color, scale, and chart type to counterbalance and emphasize the emotional impact of immutable data characteristics.","authors":[{"affiliations":["University of Waterloo, Waterloo, Canada","University of Victoria, Victoria, Canada"],"email":"cartergblair@gmail.com","is_corresponding":false,"name":"Carter Blair"},{"affiliations":["University of Victoria, Victoria, Canada","Delft University of Technology, Delft, Netherlands"],"email":"xiyao.wang23@gmail.com","is_corresponding":false,"name":"Xiyao Wang"},{"affiliations":["University of Victoria, Victoria, Canada"],"email":"cperin@uvic.ca","is_corresponding":true,"name":"Charles Perin"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1291","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Quantifying Emotional Responses to Immutable Data Characteristics and Designer Choices in Data Visualizations","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1295","abstract":"Annotations play a vital role in highlighting critical aspects of visualizations, aiding in data externalization and exploration, collaborative data analysis, and visual storytelling. However, despite their widespread use, we identified a lack of a design space describing common annotation practices. In this paper, we evaluated over 1,800 static annotated charts to understand how people annotate visualizations in practice. Through qualitative coding of these diverse real-world annotated charts, we explore three primary aspects of annotation usage patterns: analytic purposes for chart annotations (e.g., present, identify, summarize, or compare data features), mechanisms for chart annotations (e.g., types and combinations of annotations used, frequency of different annotation types across chart types, etc.), and the data source used to generate the annotations. We then synthesized our findings into a design space of annotations, highlighting key design choices for chart annotations. 
We present three case studies illustrating our design space as a practical framework for chart annotations to enhance the communication of visualization insights.","authors":[{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"dilshadur@sci.utah.edu","is_corresponding":true,"name":"Md Dilshadur Rahman"},{"affiliations":["University of Oklahoma, Norman, United States"],"email":"quadri@ou.edu","is_corresponding":false,"name":"Ghulam Jilani Quadri"},{"affiliations":["University of South Florida, Tampa, United States"],"email":"bdoppalapudi@usf.edu","is_corresponding":false,"name":"Bhavana Doppalapudi"},{"affiliations":["University of North Carolina-Chapel Hill, Chapel Hill, United States"],"email":"danielle.szafir@cs.unc.edu","is_corresponding":false,"name":"Danielle Albers Szafir"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"paul.rosen@utah.edu","is_corresponding":false,"name":"Paul Rosen"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1295","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"A Qualitative Analysis of Common Practices in Annotations: A Taxonomy and Design Space","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1302","abstract":"We present the results of an exploratory study on how pairs interact with speech commands and touch gestures on a wall-sized display during a collaborative sensemaking task. Previous work has shown that speech commands, alone or in combination with other input modalities, can support visual data exploration by individuals. However, it is still unknown whether and how speech commands can be used in collaboration, and for what tasks. To answer these questions, we developed a functioning prototype that we used as a technology probe. We conducted an in-depth exploratory study with 20 participants (10 pairs) to analyze their interaction choices, the interplay between the input modalities, and their collaboration. While touch was the most used modality, we found that participants preferred speech commands for global operations, used them for distant interaction, and that speech interaction contributed to the awareness of the partner\u2019s actions. Furthermore, the likelihood of using speech commands during collaboration was related to the personality trait of agreeableness. Regarding collaboration styles, participants interacted with speech equally often whether they were in loosely or closely coupled collaboration. While the partners stood closer to each other during close collaboration, they did not walk away from their partner to use speech commands. 
From our findings, we derive and contribute a set of design considerations for collaborative and multimodal interactive data analysis systems.","authors":[{"affiliations":["University of Bremen, Bremen, Germany","University of Bremen, Bremen, Germany"],"email":"molina@uni-bremen.de","is_corresponding":true,"name":"Gabriela Molina Le\u00f3n"},{"affiliations":["LISN, Universit\u00e9 Paris-Saclay, CNRS, INRIA, Orsay, France"],"email":"anastasia.bezerianos@universite-paris-saclay.fr","is_corresponding":false,"name":"Anastasia Bezerianos"},{"affiliations":["Inria, Palaiseau, France"],"email":"olivier.gladin@inria.fr","is_corresponding":false,"name":"Olivier Gladin"},{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Orsay, France","Inria, Saclay, France"],"email":"petra.isenberg@inria.fr","is_corresponding":false,"name":"Petra Isenberg"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1302","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Talk to the Wall: The Role of Speech Interaction in Collaborative Visual Analytics","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1307","abstract":"Building information modeling (BIM) describes a central data pool covering the entire life cycle of a construction project. Similarly, building energy modeling (BEM) describes the process of using a 3D representation of a building as a basis for thermal simulations to assess the building\u2019s energy performance. This paper explores the intersection of BIM and BEM, focusing on the challenges and methodologies in converting BIM data into BEM representations for energy performance analysis. BEMTrace integrates 3D data wrangling techniques with visualization methodologies to enhance the accuracy and traceability of the BIM-to-BEM conversion process. Through parsing, error detection, and algorithmic correction of BIM data, our methods generate valid BEM models suitable for energy simulation. Visualization techniques provide transparent insights into the conversion process, aiding error identification, validation, and user comprehension. We introduce context-adaptive selections to facilitate user interaction and understanding throughout the conversion process. 
By evaluating user feedback, we show that BEMTrace can solve domain-specific tasks.","authors":[{"affiliations":["VRVis Zentrum f\u00fcr Virtual Reality und Visualisierung Forschungs-GmbH, Vienna, Austria"],"email":"walch@vrvis.at","is_corresponding":false,"name":"Andreas Walch"},{"affiliations":["VRVis Zentrum f\u00fcr Virtual Reality und Visualisierung Forschungs-GmbH, Vienna, Austria"],"email":"szabo@vrvis.at","is_corresponding":false,"name":"Attila Szabo"},{"affiliations":["VRVis Zentrum f\u00fcr Virtual Reality und Visualisierung Forschungs-GmbH, Vienna, Austria"],"email":"hs@vrvis.at","is_corresponding":false,"name":"Harald Steinlechner"},{"affiliations":["Independent Researcher, Vienna, Austria"],"email":"thomas@ortner.fyi","is_corresponding":false,"name":"Thomas Ortner"},{"affiliations":["Institute of Visual Computing and Human-Centered Technology, Vienna, Austria"],"email":"groeller@cg.tuwien.ac.at","is_corresponding":false,"name":"Eduard Gr\u00f6ller"},{"affiliations":["VRVis Zentrum f\u00fcr Virtual Reality und Visualisierung Forschungs-GmbH, Vienna, Austria"],"email":"johanna.schmidt@vrvis.at","is_corresponding":true,"name":"Johanna Schmidt"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1307","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"BEMTrace: Visualization-driven approach for deriving Building Energy Models from BIM","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1309","abstract":"Visualizations play a critical role in validating and improving statistical models. However, the design space of model check visualizations is not well understood, making it difficult for authors to explore and specify effective graphical model checks. VMC defines a model check visualization using four components: (1) samples of distributions of checkable quantities generated from the model, including predictive distributions for new data and distributions of model parameters; (2) transformations on observed data to facilitate comparison; (3) visual representations of distributions; and (4) layouts to facilitate comparing model samples and observed data. We contribute an implementation of VMC as an R package. We validate VMC by reproducing a set of canonical model check examples, and show how using VMC to generate model checks reduces the edit distance between visualizations relative to existing visualization toolkits. 
The findings of an interview study with three expert modelers who used VMC highlight challenges and opportunities for encouraging exploration of correct, effective model check visualizations.","authors":[{"affiliations":["Northwestern University, Evanston, United States"],"email":"ziyangguo1030@gmail.com","is_corresponding":true,"name":"Ziyang Guo"},{"affiliations":["University of Chicago, Chicago, United States"],"email":"kalea@uchicago.edu","is_corresponding":false,"name":"Alex Kale"},{"affiliations":["Northwestern University, Chicago, United States"],"email":"matthew.kay@gmail.com","is_corresponding":false,"name":"Matthew Kay"},{"affiliations":["Northwestern University, Evanston, United States"],"email":"jhullman@northwestern.edu","is_corresponding":false,"name":"Jessica Hullman"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1309","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"VMC: A Grammar for Visualizing Statistical Model Checks","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1316","abstract":"We apply an approach from cognitive linguistics by mapping Conceptual Metaphor Theory (CMT) to the visualization domain to address patterns of visual conceptual metaphors that are often used in science infographics. Metaphors play an essential part in visual communication and are frequently employed to explain complex concepts. However, their use is often based on intuition, rather than following a formal process. At present, we lack tools and language for understanding and describing metaphor use in visualization to the extent where taxonomy and grammar could guide the creation of visual components, e.g., infographics. Our classification of the visual conceptual mappings within scientific representations is based on the breakdown of visual components in existing scientific infographics. We demonstrate the development of this mapping through a detailed analysis of data collected from four domains (biomedicine, climate, space, and anthropology) that represent a diverse range of visual conceptual metaphors used in the visual communication of science. This work allows us to identify patterns of visual conceptual metaphor use within the domains, resolve ambiguities about why specific conceptual metaphors are used, and develop a better overall understanding of visual metaphor use in scientific infographics. Our analysis shows that ontological and orientational conceptual metaphors are the most widely applied to translate complex scientific concepts. 
To support our findings we developed a visual exploratory tool based on the collected database that places the individual infographics on a spatio-temporal scale and illustrates the breakdown of visual conceptual metaphors.","authors":[{"affiliations":["Masaryk University, Brno, Czech Republic"],"email":"hana.pokojna@gmail.com","is_corresponding":true,"name":"Hana Pokojn\u00e1"},{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Orsay, France","Inria, Saclay, France"],"email":"tobias.isenberg@gmail.com","is_corresponding":false,"name":"Tobias Isenberg"},{"affiliations":["University of Rostock, Rostock, Germany"],"email":"stefan.bruckner@gmail.com","is_corresponding":false,"name":"Stefan Bruckner"},{"affiliations":["Masaryk University, Brno, Czech Republic"],"email":"kozlikova@fi.muni.cz","is_corresponding":false,"name":"Barbora Kozlikova"},{"affiliations":["University of Bergen, Bergen, Norway","Haukeland University Hospital, University of Bergen, Bergen, Norway"],"email":"laura.garrison@uib.no","is_corresponding":false,"name":"Laura Garrison"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1316","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"The Language of Infographics: Toward Understanding Conceptual Metaphor Use in Scientific Storytelling","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1318","abstract":"In this study, we address the growing issue of misleading charts, a prevalent problem that undermines the integrity of information dissemination. Misleading charts can distort the viewer's perception of data, leading to misinterpretations and decisions based on false information. The development of effective automatic detection methods for misleading charts is an urgent field of research. The advancement of multimodal Large Language Models (LLMs) has introduced a promising direction for addressing this challenge. We explored the capabilities of these models in analyzing complex charts and assessing the impact of different prompting strategies on the models' analyses. We utilized a dataset of misleading charts collected from the internet by prior research and crafted nine distinct prompts, ranging from simple to complex, to test the ability of four different multimodal LLMs in detecting over 21 different chart issues. Through three experiments--from initial exploration to detailed analysis--we progressively gained insights into how to effectively prompt LLMs to identify misleading charts and developed strategies to address the scalability challenges encountered as we expanded our detection range from the initial five issues to 21 issues in the final experiment. Our findings reveal that multimodal LLMs possess a strong capability for chart comprehension and critical thinking in data interpretation. 
There is significant potential in employing multimodal LLMs to counter misleading information by supporting critical thinking and enhancing visualization literacy. This study demonstrates their applicability in addressing the pressing concern of misleading charts.","authors":[{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"yhload@cse.ust.hk","is_corresponding":true,"name":"Leo Yu-Ho Lo"},{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"huamin@cse.ust.hk","is_corresponding":false,"name":"Huamin Qu"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1318","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"How Good (Or Bad) Are LLMs in Detecting Misleading Visualizations","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1325","abstract":"Dynamic data visualizations can convey large amounts of information over time, such as using motion to depict changes in data values for multiple entities. Such dynamic displays put a demand on our visual processing capacities, yet our perception of motion is limited. When tracking multiple objects across space and time, humans can typically track up to four objects, and the capacity is even lower if we also need to remember the history of the objects\u2019 features. Several techniques have been shown to improve the processing of dynamic displays. Staging the animation to sequentially show steps in a transition and tracing object movement by displaying trajectory histories can increase processing by reducing the cognitive load. In this paper, we examine the effectiveness of staging and tracing in dynamic displays. We showed participants animated line charts depicting the movements of lines and asked them to identify the line with the highest mean and variance. We manipulated the animation to display the lines with or without staging, tracing and history, and compared the results to a static chart as a control. Results showed that tracing and staging are preferred by participants, and improve their performance in mean and variance tasks, respectively. The preferred display time was 3 times shorter when staging was used. Also, encoding animation speed with mean and variance in congruent tasks is associated with higher accuracy. 
These findings help inform real-world best practices for building dynamic displays that leverage the strength of humans' visual processing.","authors":[{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"shu343@gatech.edu","is_corresponding":true,"name":"Songwen Hu"},{"affiliations":["Northwestern University, Evanston, United States"],"email":"ouxunjiang@u.northwestern.edu","is_corresponding":false,"name":"Ouxun Jiang"},{"affiliations":["Dolby Laboratories Inc., San Francisco, United States"],"email":"jcr@dolby.com","is_corresponding":false,"name":"Jeffrey Riedmiller"},{"affiliations":["Georgia Tech, Atlanta, United States","University of Massachusetts Amherst, Amherst, United States"],"email":"cxiong@gatech.edu","is_corresponding":false,"name":"Cindy Xiong Bearfield"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1325","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Motion-Based Visual Encoding Can Improve Performance on Perceptual Tasks with Dynamic Time Series","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1326","abstract":"Evaluating the quality of text responses generated by large language models (LLMs) poses unique challenges compared to traditional machine learning. While automatic side-by-side evaluation has emerged as a promising approach, LLM developers face scalability and interpretability challenges in analyzing these evaluation results. In this paper, we present LLM Comparator, a novel visual analytics tool for interactively analyzing results from side-by-side evaluation of LLMs. The tool provides users with interactive workflows to understand when and why a model performs better or worse than a baseline model, and how the responses from two models differ qualitatively. We iteratively designed and developed the tool by closely working with researchers and engineers at a large technology company. Qualitative feedback from users highlights that the tool facilitates in-depth analysis of individual examples while enabling users to visually overview and flexibly slice data. 
This empowers users to identify undesirable patterns, formulate hypotheses about model behavior, and gain insights for model improvement.","authors":[{"affiliations":["Google, Atlanta, United States"],"email":"minsuk.kahng@gmail.com","is_corresponding":true,"name":"Minsuk Kahng"},{"affiliations":["Google Research, Seattle, United States"],"email":"iftenney@google.com","is_corresponding":false,"name":"Ian Tenney"},{"affiliations":["Google Research, Cambridge, United States"],"email":"mahimap@google.com","is_corresponding":false,"name":"Mahima Pushkarna"},{"affiliations":["Google Research, Pittsburgh, United States"],"email":"lxieyang.cmu@gmail.com","is_corresponding":false,"name":"Michael Xieyang Liu"},{"affiliations":["Google Research, Cambridge, United States"],"email":"jwexler@google.com","is_corresponding":false,"name":"James Wexler"},{"affiliations":["Google, Cambridge, United States"],"email":"ereif@google.com","is_corresponding":false,"name":"Emily Reif"},{"affiliations":["Google Research, Mountain View, United States"],"email":"kallarackal@google.com","is_corresponding":false,"name":"Krystal Kallarackal"},{"affiliations":["Google Research, Seattle, United States"],"email":"minsuk.cs@gmail.com","is_corresponding":false,"name":"Minsuk Chang"},{"affiliations":["Google, Cambridge, United States"],"email":"michaelterry@google.com","is_corresponding":false,"name":"Michael Terry"},{"affiliations":["Google, Paris, France"],"email":"ldixon@google.com","is_corresponding":false,"name":"Lucas Dixon"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1326","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"LLM Comparator: Interactive Analysis of Side-by-Side Evaluation of Large Language Models","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1329","abstract":"The integration of Large Language Models (LLMs), especially ChatGPT, into education is poised to revolutionize students' learning experiences by introducing innovative conversational learning methodologies. To empower students to fully leverage the capabilities of ChatGPT in educational scenarios, understanding students' interaction patterns with ChatGPT is crucial for instructors. However, this endeavor is challenging due to the absence of datasets focused on student-ChatGPT conversations and the complexities in identifying and analyzing the evolutional interaction patterns within conversations. To address these challenges, we collected conversational data from 48 students interacting with ChatGPT in a master's level data visualization course over one semester. We then developed a coding scheme, grounded in the literature on cognitive levels and thematic analysis, to categorize students' interaction patterns with ChatGPT. 
Furthermore, we present a visual analytics system, StuGPTViz, that tracks and compares temporal patterns in student prompts and the quality of ChatGPT's responses at multiple scales, revealing significant pedagogical insights for instructors. We validated the system's effectiveness through expert interviews with six data visualization instructors and three case studies. The results confirmed StuGPTViz's capacity to enhance educators' insights into the pedagogical value of ChatGPT. We also discussed the potential research opportunities of applying visual analytics in education and developing AI-driven personalized learning solutions.","authors":[{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"zchendf@connect.ust.hk","is_corresponding":true,"name":"Zixin Chen"},{"affiliations":["The Hong Kong University of Science and Technology, Sai Kung, China"],"email":"csejiachenw@ust.hk","is_corresponding":false,"name":"Jiachen Wang"},{"affiliations":["Texas A&M University, College Station, United States"],"email":"xiameng9355@gmail.com","is_corresponding":false,"name":"Meng Xia"},{"affiliations":["The Hong Kong University of Science and Technology, Kowloon, Hong Kong"],"email":"kshigyo@connect.ust.hk","is_corresponding":false,"name":"Kento Shigyo"},{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"dliuak@connect.ust.hk","is_corresponding":false,"name":"Dingdong Liu"},{"affiliations":["Hong Kong University of Science and Technology, Hong Kong, Hong Kong"],"email":"rzhangab@connect.ust.hk","is_corresponding":false,"name":"Rong Zhang"},{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"huamin@cse.ust.hk","is_corresponding":false,"name":"Huamin Qu"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1329","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"StuGPTViz: A Visual Analytics Approach to Understand Student-ChatGPT Interactions","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1332","abstract":"Translating natural language to visualization (NL2VIS) has shown great promise for visual data analysis, but it remains a challenging task that requires multiple low-level implementations, such as natural language processing and visualization design. Recent advancements in pre-trained large language models (LLMs) are opening new avenues for generating visualizations from natural language. However, the lack of a comprehensive and reliable benchmark hinders our understanding of LLMs\u2019 capabilities in visualization generation. In this paper, we address this gap by proposing a new NL2VIS benchmark called VisEval. Firstly, we introduce a high-quality and large-scale dataset. This dataset includes 2,524 representative queries covering 146 databases, paired with accurately labeled ground truths. 
Secondly, we advocate for a comprehensive automated evaluation methodology covering multiple dimensions, including validity, legality, and readability. By systematically scanning for potential issues with a number of heterogeneous checkers, VisEval provides reliable and trustworthy evaluation outcomes. We run VisEval on a series of state-of-the-art LLMs. Our evaluation reveals prevalent challenges and delivers essential insights for future advancements.","authors":[{"affiliations":["Microsoft Research, Shanghai, China"],"email":"christy05.chen@gmail.com","is_corresponding":true,"name":"Nan Chen"},{"affiliations":["Microsoft Research, Shanghai, China"],"email":"scottyugochang@gmail.com","is_corresponding":false,"name":"Yuge Zhang"},{"affiliations":["Microsoft Research, Shanghai, China"],"email":"jiahangxu@microsoft.com","is_corresponding":false,"name":"Jiahang Xu"},{"affiliations":["ShanghaiTech University, Shanghai, China"],"email":"rk.ren@outlook.com","is_corresponding":false,"name":"Kan Ren"},{"affiliations":["Microsoft Research, Shanghai, China"],"email":"yuqyang@microsoft.com","is_corresponding":false,"name":"Yuqing Yang"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1332","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"VisEval: A Benchmark for Data Visualization in the Era of Large Language Models","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1333","abstract":"Data videos are increasingly becoming a popular data storytelling form that integrates visuals and audio. In recent years, more and more researchers have explored narrative structures for effective and attractive data storytelling. Meanwhile, the Hero's Journey provides a classic narrative framework specific to the Hero's story that has been adopted across various media. There are continuous discussions about applying the Hero's Journey to data stories. However, there is so far little systematic and practical guidance on how to create a data video for a specific story type like the Hero's Journey, or on how to manipulate its sound and visual designs simultaneously. To fill this gap, we first identified 48 data videos aligned with the Hero's Journey from 109 high-quality data videos. Then, we examined how existing practices apply the Hero's Journey to creating data videos. We coded the 48 data videos in terms of the narrative stages, sound design, and visual design according to the Hero's Journey structure. Based on our findings, we proposed a design space to provide practical guidance on customizing the narrative, visual, and sound design for different narrative segments of the Hero's Journey (i.e., Departure, Initiation, Return) through data video creation. 
To validate our proposed design space, we conducted a user study in which 20 participants were invited to design data videos with and without the guidance of our design space; the resulting videos were evaluated by two experts. Results show that our design space provides useful and practical guidance that helps data storytellers effectively create data videos with the Hero's Journey.","authors":[{"affiliations":["The Hong Kong University of Science and Technology, Guangzhou, China"],"email":"zwei302@connect.hkust-gz.edu.cn","is_corresponding":true,"name":"Zheng Wei"},{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"huamin@cse.ust.hk","is_corresponding":false,"name":"Huamin Qu"},{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"xxubq@connect.ust.hk","is_corresponding":false,"name":"Xian Xu"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1333","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Telling Data Stories with the Hero\u2019s Journey: Design Guidance for Creating Data Videos","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1342","abstract":"Genomics experts rely on visualization to extract and share insights from complex and large-scale datasets. Beyond off-the-shelf tools for data exploration, there is an increasing need for platforms that aid experts in authoring customized visualizations for both exploration and communication of insights. A variety of interactive techniques have been proposed for authoring data visualizations, such as template editing, shelf configuration, natural language input, and code editors. However, it remains unclear how genomics experts create visualizations and which techniques best support their visualization tasks and needs. To address this gap, we conducted two user studies with genomics researchers: (1) semi-structured interviews (n=20) to identify the tasks, user contexts, and current visualization authoring techniques and (2) an exploratory study (n=13) using visual probes to elicit users\u2019 intents and desired techniques when creating visualizations. 
Our contributions include (1) a characterization of how visualization authoring is currently utilized in genomics visualization, identifying limitations and benefits in light of common criteria for authoring tools, and (2) generalizable and actionable design implications for genomics visualization authoring tools based on our findings on task- and user-specific usefulness of authoring techniques.","authors":[{"affiliations":["Eindhoven University of Technology, Eindhoven, Netherlands"],"email":"a.v.d.brandt@tue.nl","is_corresponding":true,"name":"Astrid van den Brandt"},{"affiliations":["Harvard Medical School, Boston, United States"],"email":"sehi_lyi@hms.harvard.edu","is_corresponding":false,"name":"Sehi L'Yi"},{"affiliations":["Harvard Medical School, Boston, United States"],"email":"huyen_nguyen@hms.harvard.edu","is_corresponding":false,"name":"Huyen N. Nguyen"},{"affiliations":["Eindhoven University of Technology, Eindhoven, Netherlands"],"email":"a.vilanova@tue.nl","is_corresponding":false,"name":"Anna Vilanova"},{"affiliations":["Harvard Medical School, Boston, United States"],"email":"nils@hms.harvard.edu","is_corresponding":false,"name":"Nils Gehlenborg"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1342","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Understanding Visualization Authoring Techniques for Genomics Data in the Context of Personas and Tasks","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1351","abstract":"As basketball\u2019s popularity surges, fans often find themselves confused and overwhelmed by the rapid game pace and complexity. Basketball tactics, involving a complex series of actions, require substantial knowledge to be fully understood. This complexity leads to a need for additional information and explanation, which can distract fans from the game. To tackle these challenges, we present Sportify, a Visual Question Answering system that integrates narratives and embedded visualization for demystifying basketball tactical questions, aiding fans in understanding various game aspects. We propose three novel action visualizations (i.e., Pass, Cut, and Screen) to demonstrate critical action sequences. To explain the reasoning and logic behind players\u2019 actions, we leverage a large language model (LLM) to generate narratives. We adopt a storytelling approach for complex scenarios from both first and third-person perspectives, integrating action visualizations. We evaluated Sportify with basketball fans to investigate its impact on understanding of tactics, and how different personal perspectives of narratives impact the understanding of complex tactics with action visualizations. Our evaluation with basketball fans demonstrates Sportify\u2019s capability to deepen tactical insights and amplify the viewing experience. 
Furthermore, third-person narration assists viewers in obtaining in-depth game explanations, while first-person narration enhances fans\u2019 game engagement.","authors":[{"affiliations":["Harvard University, Allston, United States"],"email":"chungyi347@gmail.com","is_corresponding":true,"name":"Chunggi Lee"},{"affiliations":["Harvard University, Cambridge, United States"],"email":"mlin@g.harvard.edu","is_corresponding":false,"name":"Tica Lin"},{"affiliations":["University of Minnesota-Twin Cities, Minneapolis, United States"],"email":"ztchen@umn.edu","is_corresponding":false,"name":"Chen Zhu-Tian"},{"affiliations":["Harvard University, Cambridge, United States"],"email":"pfister@seas.harvard.edu","is_corresponding":false,"name":"Hanspeter Pfister"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1351","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Sportify: Question Answering with Embedded Visualizations and Personified Narratives for Sports Video","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1363","abstract":"Data visualization aids in making data analysis more intuitive and in-depth, with widespread applications in fields such as biology, finance, and medicine. Massive and continuously growing streaming time series data are typically visualized in the form of line charts, but the data transmission puts significant pressure on the network, leading to visualization lag or even complete failure to render. This paper proposes a universal sampling algorithm, FPCS, which retains feature points from continuously received streaming time series data, compensates for frequently fluctuating feature points, and aims to achieve efficient visualization. This algorithm bridges the gap in sampling for streaming time series data. 
The algorithm has several advantages: (1) It optimizes the sampling results by compensating for fewer feature points, retaining the visualization features of the original data very well, ensuring high-quality sampled data; (2) Its execution time is the shortest among similar existing algorithms; (3) It has an almost negligible space overhead; (4) The data sampling process does not depend on the overall data; (5) This algorithm can be applied to infinite streaming data and finite static data.","authors":[{"affiliations":["China Nanhu Academy of Electronics and Information Technology (CNAEIT), Jiaxing, China"],"email":"3271961659@qq.com","is_corresponding":true,"name":"Hongyan Li"},{"affiliations":["China Nanhu Academy of Electronics and Information Technology (CNAEIT), Jiaxing, China"],"email":"ustcboy@outlook.com","is_corresponding":false,"name":"Bo Yang"},{"affiliations":["China Nanhu Academy of Electronics and Information Technology, Jiaxing, China"],"email":"caiyansong@cnaeit.com","is_corresponding":false,"name":"Yansong Chua"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1363","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"FPCS: Feature Preserving Compensated Sampling of Streaming Time Series Data","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1368","abstract":"Synthetic Lethal (SL) relationships, although rare among the vast array of gene combinations, hold substantial promise for targeted cancer therapy. Despite advancements in AI model accuracy, there remains a persistent need among domain experts for interpretive paths and mechanism explorations that better harmonize with domain-specific knowledge, particularly due to the significant costs involved in experimentation. To address this gap, we propose an iterative Human-AI collaborative framework comprising two key components: 1) Human-Engaged Knowledge Graph Refinement based on Metapath Strategies, which leverages insights from interpretive paths and domain expertise to refine the knowledge graph through metapath strategies with appropriate granularity. 2) Cross-Granularity SL Interpretation Enhancement and Mechanism Analysis, which aids domain experts in organizing and comparing prediction results and interpretive paths across different granularities, thereby uncovering new SL relationships, enhancing result interpretation, and elucidating potential mechanisms inferred by Graph Neural Network (GNN) models. These components cyclically optimize model predictions and mechanism explorations, thereby enhancing expert involvement and intervention to build trust. This framework, facilitated by SLInterpreter, ensures that newly generated interpretive paths increasingly align with domain knowledge and adhere more closely to real-world biological principles through iterative Human-AI collaboration. 
Subsequently, we evaluate the efficacy of the framework through a case study and expert interviews.","authors":[{"affiliations":["ShanghaiTech University, Shanghai, China"],"email":"jianghr2023@shanghaitech.edu.cn","is_corresponding":true,"name":"Haoran Jiang"},{"affiliations":["ShanghaiTech University, Shanghai, China"],"email":"shishh2023@shanghaitech.edu.cn","is_corresponding":false,"name":"Shaohan Shi"},{"affiliations":["ShanghaiTech University, Shanghai, China"],"email":"zhangshh2@shanghaitech.edu.cn","is_corresponding":false,"name":"Shuhao Zhang"},{"affiliations":["ShanghaiTech University, Shanghai, China"],"email":"zhengjie@shanghaitech.edu.cn","is_corresponding":false,"name":"Jie Zheng"},{"affiliations":["ShanghaiTech University, Shanghai, China"],"email":"liquan@shanghaitech.edu.cn","is_corresponding":false,"name":"Quan Li"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1368","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"SLInterpreter: An Exploratory and Iterative Human-AI Collaborative System for GNN-based Synthetic Lethal Prediction","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1391","abstract":"In volume visualization, visualization synthesis has attracted much attention due to its ability to generate novel visualizations without following the conventional rendering pipeline. However, existing solutions based on generative adversarial networks often require many training images and take significant training time. Still, issues of low quality, inconsistency, and inflexibility persist. This paper introduces StyleRF-VolVis, an innovative style transfer framework for expressive volume visualization (VolVis) via neural radiance field (NeRF). The expressiveness of StyleRF-VolVis is upheld by its ability to accurately separate the underlying scene geometry (i.e., content) and color appearance (i.e., style), conveniently modify color, opacity, and lighting of the original rendering while maintaining visual content consistency across the views, and effectively transfer arbitrary styles from reference images to the reconstructed 3D scene. To achieve these goals, we design a base NeRF model for scene geometry extraction, a palette color network to classify regions of the radiance field for photorealistic editing, and an unrestricted color network to lift the color palette constraint via knowledge distillation for non-photorealistic editing. 
We demonstrate the superior quality, consistency, and flexibility of StyleRF-VolVis by experimenting with various volume rendering scenes and reference images and comparing StyleRF-VolVis against other image-based (AdaIN), video-based (ReReVST), and NeRF-based (ARF and SNeRF) style rendering solutions.","authors":[{"affiliations":["University of Notre Dame, Notre Dame, United States"],"email":"ktang2@nd.edu","is_corresponding":true,"name":"Kaiyuan Tang"},{"affiliations":["University of Notre Dame, Notre Dame, United States"],"email":"chaoli.wang@nd.edu","is_corresponding":false,"name":"Chaoli Wang"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1391","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"StyleRF-VolVis: Style Transfer of Neural Radiance Fields for Expressive Volume Visualization","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1393","abstract":"This paper discusses challenges and design strategies in responsive design for thematic maps in information visualization. Thematic maps pose a number of unique challenges for responsiveness, such as inflexible aspect ratios that do not easily adapt to varying screen dimensions, or densely clustered visual elements in urban areas becoming illegible at smaller scales. However, design guidance on how to best address these issues is currently lacking. We conducted design sessions with eight professional designers and developers of web-based thematic maps for information visualization. Participants were asked to redesign a given map for various screen sizes and aspect ratios and to describe their reasoning for when and how they adapted the design. We report general observations of practitioners\u2019 motivations, decision-making processes, and personal design frameworks. We then derive seven challenges commonly encountered in responsive map design, and 17 strategies to address them, such as repositioning elements, segmenting the map, or using alternative visualizations. We compile these challenges and strategies into an illustrated cheat sheet targeted at anyone designing or learning to design responsive maps. 
The cheat sheet is available online: https://responsive-vis.github.io/map-cheat-sheet.","authors":[{"affiliations":["University of Edinburgh, Edinburgh, United Kingdom"],"email":"sarah.schoettler@ed.ac.uk","is_corresponding":true,"name":"Sarah Sch\u00f6ttler"},{"affiliations":["University of Edinburgh, Edinburgh, United Kingdom"],"email":"uhinrich@ed.ac.uk","is_corresponding":false,"name":"Uta Hinrichs"},{"affiliations":["Inria, Bordeaux, France","University of Edinburgh, Edinburgh, United Kingdom"],"email":"bbach@inf.ed.ac.uk","is_corresponding":false,"name":"Benjamin Bach"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1393","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Practices and Strategies in Responsive Thematic Map Design: A Report from Design Workshops with Experts","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1394","abstract":"This paper presents discursive patinas, a technique to visualize discussions onto data visualizations, inspired by how people leave traces in the physical world. While data visualizations are widely discussed in online communities and social media, comments tend to be displayed separately from the visualization. We lack ways to relate these discussions to the content of the visualization, e.g., to situate comments, explain visual patterns, or question assumptions. In our visualization annotation interface, users can designate areas within the visualization to, e.g., highlight specific visual marks (anchors), attach textual comments, and add category labels, likes, and replies. By coloring and styling these designated areas, a meta visualization emerges, showing what and where people comment and annotate. These patinas show regions of heavy discussion, recent commenting activity, and the distribution of questions, suggestions, or personal stories. To study how people use anchors to discuss visualizations and understand if and how information in patinas influences people's understanding of the discussion, we ran workshops with 90 participants including students, domain experts, and visualization researchers. Our results show that discursive patinas improve the ability to navigate discussions and guide people to comments that help understand, contextualize, or scrutinize the visualization. 
We discuss the potential of the technique to support discursive engagements, including critical readings of visualizations, design feedback, and feminist approaches to data visualization.","authors":[{"affiliations":["University of Edinburgh, Edinburgh, United Kingdom","Potsdam University of Applied Sciences, Potsdam, Germany"],"email":"tobias.kauer@fh-potsdam.de","is_corresponding":true,"name":"Tobias Kauer"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"derya.akbaba@liu.se","is_corresponding":false,"name":"Derya Akbaba"},{"affiliations":["University of Applied Sciences Potsdam, Potsdam, Germany"],"email":"doerk@fh-potsdam.de","is_corresponding":false,"name":"Marian D\u00f6rk"},{"affiliations":["Inria, Bordeaux, France","University of Edinburgh, Edinburgh, United Kingdom"],"email":"bbach@inf.ed.ac.uk","is_corresponding":false,"name":"Benjamin Bach"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1394","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Discursive Patinas: Anchoring Discussions in Data Visualizations","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1395","abstract":"Onboarding a user to a visualization dashboard entails explaining its various components, including the chart types used, the data loaded, and the interactions provided. Authoring such an onboarding experience is time-consuming and requires significant knowledge, and little guidance exists on how best to do this. End-users being onboarded to a new dashboard can be either confused and overwhelmed, or disinterested and disengaged, depending on their expertise. We propose interactive dashboard tours (d-tours) as semi-automated onboarding experiences that adapt to variable user expertise while preserving the user\u2019s agency, interest, and engagement. Our interactive tour concept draws from open-world game design to give the user freedom in choosing their path in the onboarding. We have implemented the concept in a tool called D-TOUR PROTOTYPE that allows authors to craft custom and interactive dashboard tours from scratch or using automatic templates. Automatically generated tours can still be customized to use different media (such as video, audio, or highlighting) or new narratives to produce a tailored onboarding experience for individual users or groups. We demonstrate the usefulness of interactive dashboard tours through use cases and expert interviews. The evaluation shows that authors find the automation in the D-Tour prototype helpful and time-saving, and users find it engaging and intuitive. 
This paper and all supplemental materials are available at https://osf.io/6fbjp/.","authors":[{"affiliations":["Pro2Future GmbH, Linz, Austria","Johannes Kepler University, Linz, Austria"],"email":"vaishali.dhanoa@pro2future.at","is_corresponding":true,"name":"Vaishali Dhanoa"},{"affiliations":["Johannes Kepler University, Linz, Austria"],"email":"andreas.hinterreiter@jku.at","is_corresponding":false,"name":"Andreas Hinterreiter"},{"affiliations":["Johannes Kepler University, Linz, Austria"],"email":"vanessa.fediuk@jku.at","is_corresponding":false,"name":"Vanessa Fediuk"},{"affiliations":["Aarhus University, Aarhus, Denmark"],"email":"elm@cs.au.dk","is_corresponding":false,"name":"Niklas Elmqvist"},{"affiliations":["Institute of Visual Computing & Human-Centered Technology, Vienna, Austria"],"email":"groeller@cg.tuwien.ac.at","is_corresponding":false,"name":"Eduard Gr\u00f6ller"},{"affiliations":["Johannes Kepler University Linz, Linz, Austria"],"email":"marc.streit@jku.at","is_corresponding":false,"name":"Marc Streit"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1395","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"D-Tour: Semi-Automatic Generation of Interactive Guided Tours for Visualization Dashboard Onboarding","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1414","abstract":"Visualization designers often rely on examples to explore the space of possible designs, yet we have little insight into how examples shape data visualization design outcomes. While the effects of examples have been studied in other disciplines, such as web design or engineering, the results are not readily applicable to visualization design due to inconsistencies in findings and challenges unique to visualization design. Towards bridging this gap, we conduct an exploratory experiment involving 32 data visualization designers focusing on the influence of five factors (timing, quantity, diversity, data topic similarity, and data schema similarity) on objectively measurable design outcomes (e.g., numbers of designs and idea transfers). Our quantitative analysis shows that when examples are introduced after initial brainstorming, designers curate examples with topics less similar to the dataset they are working on and produce more designs with a high variation in visualization components. Also, designers copy more ideas from examples with higher data schema similarities. Our qualitative analysis of participants\u2019 thought processes provides insights into why designers incorporate examples into their designs, revealing potential factors that have not been previously investigated. Finally, we discuss how our results inform future work on quantifying designs, improving measures of effectiveness, and supporting example-based visualization design. 
All supplementary materials are available at https://osf.io/sbp2k/?view_only=ca14af497f5845a0b1b2c616699fefc5","authors":[{"affiliations":["University of Maryland, College Park, United States"],"email":"hbako@umd.edu","is_corresponding":true,"name":"Hannah K. Bako"},{"affiliations":["The University of Texas at Austin, Austin, United States"],"email":"xinyi.liu@utexas.edu","is_corresponding":false,"name":"Xinyi Liu"},{"affiliations":["University of Maryland, College Park, United States"],"email":"gko1@terpmail.umd.edu","is_corresponding":false,"name":"Grace Ko"},{"affiliations":["Human Data Interaction Lab, College Park, United States"],"email":"hsong02@cs.umd.edu","is_corresponding":false,"name":"Hyemi Song"},{"affiliations":["University of Washington, Seattle, United States"],"email":"leibatt@cs.washington.edu","is_corresponding":false,"name":"Leilani Battle"},{"affiliations":["University of Maryland, College Park, United States"],"email":"leozcliu@umd.edu","is_corresponding":false,"name":"Zhicheng Liu"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1414","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Unveiling How Examples Shape Data Visualization Design Outcomes","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1416","abstract":"Various data visualization downstream applications such as reverse engineering and interactive authoring require a vocabulary that describes the structure of visualization scenes and the procedure to manipulate them. A few scene abstractions have been proposed, but they are restricted to specific applications for a limited set of visualization types. A unified and expressive model of data visualization scenes for different downstream applications has been missing. To fill this gap, we present Manipulable Semantic Components (MSC), a computational representation of data visualization scenes, to support applications in scene understanding and augmentation. MSC consists of two parts: a unified object model describing the structure of a visualization scene in terms of semantic components, and a set of operations to generate and modify the scene components. 
We demonstrate the benefits of MSC in three applications: visualization authoring, visualization deconstruction and reuse, and animation specification.","authors":[{"affiliations":["University of Maryland, College Park, United States"],"email":"leozcliu@umd.edu","is_corresponding":true,"name":"Zhicheng Liu"},{"affiliations":["University of Maryland, College Park, United States"],"email":"cchen24@umd.edu","is_corresponding":false,"name":"Chen Chen"},{"affiliations":["University of Maryland, College Park, United States"],"email":"hookerj100@gmail.com","is_corresponding":false,"name":"John Hooker"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1416","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Manipulable Semantic Components: a Computational Representation of Data Visualization Scenes","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1422","abstract":"Visualization items\u2014factual questions about visualizations that ask viewers to accomplish visualization tasks\u2014are regularly used in the field of information visualization as educational and evaluative materials. For example, researchers of visualization literacy require large, diverse banks of items to conduct studies where the same skill is measured repeatedly on the same participants. Yet, generating a large number of high-quality, diverse items requires significant time and expertise. To address the critical need for a large number of diverse visualization items in education and research, this paper investigates the potential for large language models (LLMs) to automate the generation of multiple-choice visualization items. Through an iterative design process, we develop an LLM-based pipeline, the VILA (Visualization Items Generated by Large LAnguage Models) pipeline, for efficiently generating visualization items that measure people\u2019s ability to accomplish visualization tasks. We use the VILA pipeline to generate 1,404 candidate items across 12 chart types and 13 visualization tasks. In collaboration with 11 visualization experts, we develop an evaluation rulebook which we then use to rate the quality of all candidate items. The result is a final bank, the VILA bank, of \u223c1,100 items. From this evaluation, we also identify and classify current limitations of LLMs in generating visualization items, and discuss the role of human oversight in ensuring quality. In addition, we demonstrate an application of our work by creating a visualization literacy test, VILA-VLAT, which measures people\u2019s ability to complete a diverse set of tasks on various types of visualizations; to show the potential of this application, we assess the convergent validity of VILA-VLAT by comparing it to the existing test VLAT via an online study (R = 0.70). 
Lastly, we discuss the application areas of the VILA pipeline and the VILA bank and provide practical recommendations for their use. All supplemental materials are available at https://osf.io/ysrhq/?view_only=e31b3ddf216e4351bb37bcedf744e9d6.","authors":[{"affiliations":["Northwestern University, Evanston, United States"],"email":"yuancui2025@u.northwestern.edu","is_corresponding":true,"name":"Yuan Cui"},{"affiliations":["Northwestern University, Evanston, United States"],"email":"wanqian.ge@northwestern.edu","is_corresponding":false,"name":"Lily W. Ge"},{"affiliations":["Worcester Polytechnic Institute, Worcester, United States"],"email":"yding5@wpi.edu","is_corresponding":false,"name":"Yiren Ding"},{"affiliations":["Worcester Polytechnic Institute, Worcester, United States"],"email":"ltharrison@wpi.edu","is_corresponding":false,"name":"Lane Harrison"},{"affiliations":["Northwestern University, Evanston, United States"],"email":"fumeng.p.yang@gmail.com","is_corresponding":false,"name":"Fumeng Yang"},{"affiliations":["Northwestern University, Chicago, United States"],"email":"matthew.kay@gmail.com","is_corresponding":false,"name":"Matthew Kay"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1422","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Promises and Pitfalls: Using Large Language Models to Generate Visualization Items","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1425","abstract":"Comics have been shown to be an effective method for sequential data-driven storytelling, especially for dynamic graphs that change over time. However, manually creating a data-driven comic for a dynamic graph is currently time-consuming, complex, and error-prone. In this paper, we propose DG Comics, a novel comic authoring tool for dynamic graphs that allows users to semi-automatically build the comic and annotate it. The tool uses a hierarchical clustering algorithm that we newly developed for segmenting consecutive snapshots of the dynamic graph while preserving their chronological order. It also provides rich information on both individuals and communities extracted from dynamic graphs in multiple views, where users can explore dynamic graphs and choose what to tell in comics. For evaluation, we provide an example and report results from a user study and expert review.","authors":[{"affiliations":["Ulsan National Institute of Science and Technology, Ulsan, Korea, Republic of"],"email":"joohee@unist.ac.kr","is_corresponding":true,"name":"Joohee Kim"},{"affiliations":["Ulsan National Institute of Science and Technology, Ulsan, Korea, Republic of"],"email":"gusdnr0916@unist.ac.kr","is_corresponding":false,"name":"Hyunwook Lee"},{"affiliations":["Ulsan National Institute of Science and Technology, Ulsan, Korea, Republic of"],"email":"ducnm@unist.ac.kr","is_corresponding":false,"name":"Duc M. 
Nguyen"},{"affiliations":["Australian National University, Canberra, Australia"],"email":"minjeong.shin@anu.edu.au","is_corresponding":false,"name":"Minjeong Shin"},{"affiliations":["IBM Research, Cambridge, United States"],"email":"bumchul.kwon@us.ibm.com","is_corresponding":false,"name":"Bum Chul Kwon"},{"affiliations":["UNIST, Ulsan, Korea, Republic of"],"email":"sako@unist.ac.kr","is_corresponding":false,"name":"Sungahn Ko"},{"affiliations":["Aarhus University, Aarhus, Denmark"],"email":"elm@cs.au.dk","is_corresponding":false,"name":"Niklas Elmqvist"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1425","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"DG Comics: Semi-Automatically Authoring Graph Comics for Dynamic Graphs","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1427","abstract":"Numerical simulation serves as a cornerstone in scientific modeling, yet the process of fine-tuning simulation parameters poses significant challenges. Conventionally, parameter adjustment relies on extensive numerical simulations, data analysis, and expert insights, resulting in substantial computational costs and low efficiency. The emergence of deep learning in recent years has provided promising avenues for more efficient exploration of parameter spaces. However, existing approaches often lack intuitive methods for precise parameter adjustment and optimization. To tackle these challenges, we introduce ParamsDrag, a model that facilitates parameter space exploration through direct interaction with visualizations. Inspired by DragGAN, our ParamsDrag model operates in three steps. First, the generative component of ParamsDrag generates visualizations based on the input simulation parameters. Second, by directly dragging structure-related features in the visualizations, users can intuitively understand the controlling effect of different parameters. Third, with the understanding from the earlier step, users can steer ParamsDrag to produce dynamic visual outcomes. 
Through experiments conducted on real-world simulations and comparisons with state-of-the-art deep-learning-based approaches, we demonstrate the efficacy of our solution.","authors":[{"affiliations":["Computer Network Information Center, Chinese Academy of Sciences, Beijing, China","University of Chinese Academy of Sciences, Beijing, China"],"email":"liguan@sccas.cn","is_corresponding":true,"name":"Guan Li"},{"affiliations":["Beijing Forestry University, Beijing, China"],"email":"leo_edumail@163.com","is_corresponding":false,"name":"Yang Liu"},{"affiliations":["Computer Network Information Center, Chinese Academy of Sciences, Beijing, China"],"email":"sgh@sccas.cn","is_corresponding":false,"name":"Guihua Shan"},{"affiliations":["Chinese Academy of Sciences, Beijing, China"],"email":"chengshiyu@cnic.cn","is_corresponding":false,"name":"Shiyu Cheng"},{"affiliations":["Beijing Forestry University, Beijing, China"],"email":"weiqun.cao@126.com","is_corresponding":false,"name":"Weiqun Cao"},{"affiliations":["Visa Research, Palo Alto, United States"],"email":"junpeng.wang.nk@gmail.com","is_corresponding":false,"name":"Junpeng Wang"},{"affiliations":["National Taiwan Normal University, Taipei City, Taiwan"],"email":"caseywang777@gmail.com","is_corresponding":false,"name":"Ko-Chih Wang"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1427","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"ParamsDrag: Interactive Parameter Space Exploration via Image-Space Dragging","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1438","abstract":"Differential privacy protects individual privacy but poses challenges for data exploration because the limited privacy budget restricts the flexibility of exploration and the noisy feedback to data requests leads to confusing uncertainty. In this study, we are the first to describe such exploration scenarios, including their underlying requirements and available exploration strategies. To facilitate practical applications, we propose a visual analysis approach to the formulation of exploration strategies. Our approach applies a reinforcement learning model to provide diverse suggestions for exploration strategies according to the exploration intent of users. A novel visual design for representing uncertainty in correlation patterns is integrated into our prototype system to support the proposed approach. Finally, we conducted a user study and two case studies. 
The results of these studies verified that our approach can help develop strategies that satisfy the exploration intent of users.","authors":[{"affiliations":["Nankai University, Tianjin, China"],"email":"wangxumeng@nankai.edu.cn","is_corresponding":true,"name":"Xumeng Wang"},{"affiliations":["Nankai University, Tianjin, China"],"email":"jiaoshuangcheng@mail.nankai.edu.cn","is_corresponding":false,"name":"Shuangcheng Jiao"},{"affiliations":["Arizona State University, Tempe, United States"],"email":"cbryan16@asu.edu","is_corresponding":false,"name":"Chris Bryan"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1438","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Defogger: A Visual Analysis Approach for Data Exploration of Sensitive Data Protected by Differential Privacy","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1446","abstract":"We are currently witnessing an increase in web-based, data-driven initiatives that explain complex, contemporary issues through data and visualizations: climate change, sustainability, AI, or cultural discoveries. Many of these projects call themselves \"atlases\", a term that historically referred to collections of maps or scientific illustrations. To answer the question of what makes a \"visualization atlas\", we conducted a systematic analysis of 33 visualization atlases and semi-structured interviews with eight visualization atlas creators. Based on our results, we contribute (1) a definition of visualization atlases as an emerging format to present complex topics in a holistic, data-driven, and curated way through visualization, (2) a set of design patterns and design dimensions that led to (3) defining 5 visualization atlas genres, and (4) insights into atlas creation from the interviews. We found that visualization atlases are unique in that they combine exploratory visualization with narrative elements from data-driven storytelling and structured navigation mechanisms. They can act as reference, communication, or discovery tools targeting a wide range of audiences with different levels of domain knowledge. 
We conclude with a discussion of current design practices and emerging questions around the ethics and potential real-world impact of visualization atlases, aiming to inform their design and study.","authors":[{"affiliations":["The University of Edinburgh, Edinburgh, United Kingdom"],"email":"jinrui.w@outlook.com","is_corresponding":true,"name":"Jinrui Wang"},{"affiliations":["Newcastle University, Newcastle Upon Tyne, United Kingdom"],"email":"xinhuan.shu@gmail.com","is_corresponding":false,"name":"Xinhuan Shu"},{"affiliations":["Inria, Bordeaux, France","University of Edinburgh, Edinburgh, United Kingdom"],"email":"bbach@inf.ed.ac.uk","is_corresponding":false,"name":"Benjamin Bach"},{"affiliations":["University of Edinburgh, Edinburgh, United Kingdom"],"email":"uhinrich@ed.ac.uk","is_corresponding":false,"name":"Uta Hinrichs"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1446","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Visualization Atlases: Explaining and Exploring Complex Topics through Data, Visualization, and Narration","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1451","abstract":"We present a systematic review, an empirical study, and a first set of considerations for designing visualizations in motion, derived from a concrete scenario in which these visualizations were used to support a primary task. In practice, when viewers are confronted with embedded visualizations, they often have to focus on a primary task and can only quickly glance at a visualization showing rich, often dynamically updated, information. As such, the visualizations must be designed so as not to distract from the primary task, while at the same time being readable and useful for aiding the primary task. For example, in games, players who are engaged in a battle have to look at their enemies but also read the remaining health of their own game character from the health bar over their character's head. Many trade-offs are possible in the design of embedded visualizations in such dynamic scenarios, which we explore in depth in this paper with a focus on user experience. We use video games as an example of an application context with a rich existing set of visualizations in motion. We begin our work with a systematic review of in-game visualizations in motion. Next, we conduct an empirical user study to investigate how different designs of embedded visualizations in motion impact user experience. We conclude with a set of considerations and trade-offs for designing visualizations in motion more broadly as derived from what we learned about video games. 
All supplemental materials of this paper are available at osf.io/3v8wm/.","authors":[{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Orsay, France","Inria, Saclay, France"],"email":"yaolijie0219@gmail.com","is_corresponding":true,"name":"Lijie Yao"},{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Orsay, France","Inria, Saclay, France"],"email":"federicabucchieri@gmail.com","is_corresponding":false,"name":"Federica Bucchieri"},{"affiliations":["Carleton University, Ottawa, Canada"],"email":"dieselfish@gmail.com","is_corresponding":false,"name":"Victoria McArthur"},{"affiliations":["LISN, Universit\u00e9 Paris-Saclay, CNRS, INRIA, Orsay, France"],"email":"anastasia.bezerianos@universite-paris-saclay.fr","is_corresponding":false,"name":"Anastasia Bezerianos"},{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Orsay, France","Inria, Saclay, France"],"email":"petra.isenberg@inria.fr","is_corresponding":false,"name":"Petra Isenberg"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1451","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"User Experience of Visualizations in Motion: A Case Study and Design Considerations","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1461","abstract":"This paper presents a practical approach for the optimization of topological simplification, a central pre-processing step for the analysis and visualization of scalar data. Given an input scalar field f and a set of \u201csignal\u201d persistence pairs to maintain, our approach produces an output field g that is close to f and which optimizes (i) the cancellation of \u201cnon-signal\u201d pairs, while (ii) preserving the \u201csignal\u201d pairs. In contrast to pre-existing simplification approaches, our method is not restricted to persistence pairs involving extrema and can thus address a larger class of topological features, in particular saddle pairs in three-dimensional scalar data. Our approach leverages recent generic persistence optimization frameworks and extends them with tailored accelerations specific to the problem of topological simplification. Extensive experiments report substantial accelerations over these frameworks, thereby making topological simplification optimization practical for real-life datasets. Our work enables a direct visualization and analysis of the topologically simplified data, e.g., via isosurfaces of simplified topology (fewer components and handles). We apply our approach to the extraction of prominent filament structures in three-dimensional data. Specifically, we show that our pre-simplification of the data leads to practical improvements over standard topological techniques for removing filament loops. We also show how our framework can be used to repair genus defects in surface processing. 
Finally, we provide a C++ implementation for reproducibility purposes.","authors":[{"affiliations":["CNRS, Paris, France","SORBONNE UNIVERSITE, Paris, France"],"email":"mohamed.kissi@lip6.fr","is_corresponding":true,"name":"Mohamed KISSI"},{"affiliations":["CNRS, Paris, France","Sorbonne Universit\u00e9, Paris, France"],"email":"mathieu.pont@lip6.fr","is_corresponding":false,"name":"Mathieu Pont"},{"affiliations":["University of Arizona, Tucson, United States"],"email":"josh@cs.arizona.edu","is_corresponding":false,"name":"Joshua A Levine"},{"affiliations":["CNRS, Paris, France","Sorbonne Universit\u00e9, Paris, France"],"email":"julien.tierny@sorbonne-universite.fr","is_corresponding":false,"name":"Julien Tierny"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1461","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"A Practical Solver for Scalar Data Topological Simplification","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1472","abstract":"Trained on vast corpora, Large Language Models (LLMs) have the potential to encode visualization design knowledge and best practices. However, if they fail to do so, they might provide unreliable visualization recommendations. What visualization design preferences, then, have LLMs learned? We contribute DracoGPT, an approach for extracting and modeling visualization design preferences from LLMs. To assess varied tasks, we develop two pipelines---DracoGPT-Rank and DracoGPT-Recommend---to model LLMs prompted to either rank or recommend visual encoding specifications. We use Draco as a shared knowledge base in which to represent LLM design preferences and compare them to best practices from empirical research. We demonstrate that DracoGPT models the preferences expressed by LLMs well, enabling analysis in terms of Draco design constraints. Across a suite of backing LLMs, we find that DracoGPT-Rank and DracoGPT-Recommend moderately agree with each other, but both substantively diverge from guidelines drawn from human subjects experiments. Future work can build on our approach to expand Draco's knowledge base to model a richer set of preferences and serve as a reliable and cost-effective stand-in for LLMs.","authors":[{"affiliations":["University of Washington, Seattle, United States"],"email":"wwill@cs.washington.edu","is_corresponding":true,"name":"Huichen Will Wang"},{"affiliations":["University of Washington, Seattle, United States"],"email":"mgord@cs.stanford.edu","is_corresponding":false,"name":"Mitchell L. 
Gordon"},{"affiliations":["University of Washington, Seattle, United States"],"email":"leibatt@cs.washington.edu","is_corresponding":false,"name":"Leilani Battle"},{"affiliations":["University of Washington, Seattle, United States"],"email":"jheer@uw.edu","is_corresponding":false,"name":"Jeffrey Heer"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1472","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"DracoGPT: Extracting Visualization Design Preferences from Large Language Models","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1474","abstract":"Recent advancements in Large Language Models (LLMs) and Prompt Engineering have made chatbot customization more accessible, significantly reducing barriers to tasks that previously required programming skills. However, prompt evaluation, especially at the dataset scale, remains complex due to the need to assess prompts across thousands of test instances within a dataset. Our study, based on a comprehensive literature review and pilot study, summarized five critical challenges in prompt evaluation. In response, we introduce a feature-oriented workflow for systematic prompt evaluation, focusing on text summarization. Our workflow advocates feature metrics such as complexity, formality, or naturalness, instead of using traditional quality metrics like ROUGE. This design choice enables a more user-friendly evaluation of prompts, as it guides users in sorting through the ambiguity inherent in natural language. To support this workflow, we introduce Awesum, a visual analytics system that facilitates identifying optimal prompt refinements through interactive visualizations, featuring a novel Prompt Comparator design that employs a BubbleSet-inspired design enhanced by dimensionality reduction techniques. We evaluate the effectiveness and general applicability of the system with practitioners from various domains and found that (1) our design helps overcome the learning curve for non-technical people to conduct a systematic evaluation, and (2) our feature-oriented workflow has the potential to generalize to other NLG and image-generation tasks. 
For future work, we advocate moving towards feature-oriented evaluation of LLM prompts and discuss unsolved challenges in terms of human-agent interaction.","authors":[{"affiliations":["University of California Davis, Davis, United States"],"email":"ytlee@ucdavis.edu","is_corresponding":true,"name":"Sam Yu-Te Lee"},{"affiliations":["University of California, Davis, Davis, United States"],"email":"abahukhandi@ucdavis.edu","is_corresponding":false,"name":"Aryaman Bahukhandi"},{"affiliations":["University of California at Davis, Davis, United States"],"email":"dyuliu@ucdavis.edu","is_corresponding":false,"name":"Dongyu Liu"},{"affiliations":["University of California at Davis, Davis, United States"],"email":"ma@cs.ucdavis.edu","is_corresponding":false,"name":"Kwan-Liu Ma"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1474","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Towards Dataset-scale and Feature-oriented Evaluation of Text Summarization in Large Language Model Prompts","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1480","abstract":"We propose the notion of Attention-aware Visualizations (AAVs) that track the user's perception of a visual representation over time and feed this information back to the visualization. This idea is particularly useful for ubiquitous and immersive analytics where knowing which embedded visualizations the user is looking at can be used to make visualizations react appropriately to the user's attention: for example, by highlighting data the user has not yet seen. We can separate the approach into three components: (1) measuring the user's gaze on a visualization and its parts; (2) tracking the user's attention over time; and (3) reactively modifying the visual representation based on the current attention metric. In this paper, we present two separate implementations of AAV: a 2D numeric integration of attention for web-based visualizations that can use an embodied eye-tracker to capture the user's gaze, and a 3D implementation that uses the stencil buffer to track the visibility of each individual mark in a visualization. Both methods provide similar mechanisms for accumulating attention over time and changing the appearance of marks in response. We also present results from a controlled laboratory experiment studying different visual feedback mechanisms for attention.","authors":[{"affiliations":["Aarhus University, Aarhus, Denmark"],"email":"arvind@cs.au.dk","is_corresponding":true,"name":"Arvind Srinivasan"},{"affiliations":["Aarhus University, Aarhus N, Denmark"],"email":"johannes@ellemose.eu","is_corresponding":false,"name":"Johannes Ellemose"},{"affiliations":["Bangor University, Bangor, United Kingdom"],"email":"p.butcher@bangor.ac.uk","is_corresponding":false,"name":"Peter W. S.
Butcher"},{"affiliations":["Bangor University, Bangor, United Kingdom"],"email":"p.ritsos@bangor.ac.uk","is_corresponding":false,"name":"Panagiotis D. Ritsos"},{"affiliations":["Aarhus University, Aarhus, Denmark"],"email":"elm@cs.au.dk","is_corresponding":false,"name":"Niklas Elmqvist"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1480","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Attention-Aware Visualization: Tracking and Responding to User Perception Over Time","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1483","abstract":"Egocentric networks, often visualized as node-link diagrams, portray the complex relationship (link) dynamics between an entity (node) and others. However, common analytics tasks are multifaceted, encompassing interactions among four key aspects: strength, function, structure, and content. Current node-link visualization designs may fall short, focusing narrowly on certain aspects and neglecting the holistic, dynamic nature of egocentric networks. To bridge this gap, we introduce SpreadLine, a novel visualization framework designed to enable the visual exploration of egocentric networks from these four aspects at the microscopic level. Leveraging the intuitive appeal of storyline visualizations, SpreadLine adopts a storyline-based design to represent entities and their evolving relationships. We further encode essential topological information in the layout and condense the contextual information in a metro map metaphor, allowing for a more engaging and effective way to explore temporal and attribute-based information. To guide our work, with a thorough review of pertinent literature, we have distilled a task taxonomy that addresses the analytical needs specific to egocentric network exploration. Acknowledging the diverse analytical requirements of users, SpreadLine offers customizable encodings to enable users to tailor the framework for their tasks. 
We demonstrate the efficacy and general applicability of SpreadLine through three diverse real-world case studies and a usability study.","authors":[{"affiliations":["University of California, Davis, Davis, United States"],"email":"yskuo@ucdavis.edu","is_corresponding":true,"name":"Yun-Hsin Kuo"},{"affiliations":["University of California at Davis, Davis, United States"],"email":"dyuliu@ucdavis.edu","is_corresponding":false,"name":"Dongyu Liu"},{"affiliations":["University of California at Davis, Davis, United States"],"email":"ma@cs.ucdavis.edu","is_corresponding":false,"name":"Kwan-Liu Ma"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1483","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"SpreadLine: Visualizing Egocentric Dynamic Influence","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1487","abstract":"Referential gestures, or as termed in linguistics, deixis, are an essential part of communication around data visualizations. Despite their importance, such gestures are often overlooked when documenting data analysis meetings. Transcripts, for instance, fail to capture gestures, and video recordings may not adequately capture or emphasize them. We introduce a novel method for documenting collaborative data meetings that treats deixis as a first-class citizen. Our proposed framework captures cursor-based gestural data along with audio and converts them into interactive documents. The framework leverages a large language model to identify word correspondences with gestures. These identified references are used to create context-based annotations in the resulting interactive document. We assess the effectiveness of our proposed method through a user study, finding that participants preferred our automated interactive documentation over recordings, transcripts, and manual note-taking. Furthermore, we derive a preliminary taxonomy of cursor-based deictic gestures from participant actions during the study. This taxonomy offers further opportunities for better utilizing cursor-based deixis in collaborative data analysis scenarios.","authors":[{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"hatch.on27@gmail.com","is_corresponding":true,"name":"Chang Han"},{"affiliations":["The University of Utah, Salt Lake City, United States"],"email":"kisaacs@sci.utah.edu","is_corresponding":false,"name":"Katherine E.
Isaacs"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1487","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"A Deixis-Centered Approach for Documenting Remote Synchronous Communication around Data Visualizations","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1488","abstract":"A year ago, we submitted an IEEE VIS paper entitled \u201cSwaying the Public? Impacts of Election Forecast Visualizations on Emotion, Trust, and Intention in the 2022 U.S. Midterms\u201d [68], which was later bestowed with the honor of a best paper award. Yet, studying such a complex phenomenon required us to explore many more design paths than we could count, and certainly more than we could document in a single paper. This paper, then, is the unwritten prequel\u2014the backstory. It chronicles our journey from a simple idea\u2014to study visualizations for election forecasts\u2014through obstacles such as developing meaningfully different, easy-to-understand forecast visualizations, crafting professional-looking forecasts, and grappling with how to study perceptions of the forecasts before, during, and after the 2022 U.S. midterm elections. Our backstory began with developing a design space for two-party election forecasts, de\ufb01ning dimensions such as data transformations, visual channels, layouts, and types of animated narratives. We then qualitatively evaluated ten representative prototypes in this design space through interviews with 13 participants. The interviews yielded invaluable insights into how people interpret uncertainty visualizations and reason about probability in a U.S. election context, such as confounding win probability with vote share and erroneously forming connections between concrete visual representations (like dots) and real-world entities (like votes). Informed by these insights, we revised our prototypes to address ambiguity in interpreting visual encodings, particularly through the inclusion of extensive annotations. As we navigated these design paths, we contributed a design space and insights that may help others when designing uncertainty visualizations. 
We also hope that our design lessons and research process can inspire the research community when exploring topics related to designing visualizations for the general public.","authors":[{"affiliations":["Northwestern University, Evanston, United States"],"email":"fumeng.p.yang@gmail.com","is_corresponding":true,"name":"Fumeng Yang"},{"affiliations":["Northwestern University, Evanston, United States","Northwestern University, Evanston, United States"],"email":"mandicai2028@u.northwestern.edu","is_corresponding":false,"name":"Mandi Cai"},{"affiliations":["Northwestern University, Evanston, United States"],"email":"chloemortenson2026@u.northwestern.edu","is_corresponding":false,"name":"Chloe Rose Mortenson"},{"affiliations":["Northwestern University, Evanston, United States"],"email":"hoda@u.northwestern.edu","is_corresponding":false,"name":"Hoda Fakhari"},{"affiliations":["Northwestern University, Evanston, United States"],"email":"aysedlokmanoglu@gmail.com","is_corresponding":false,"name":"Ayse Deniz Lokmanoglu"},{"affiliations":["Northwestern University, Evanston, United States"],"email":"nicholas.diakopoulos@gmail.com","is_corresponding":false,"name":"Nicholas Diakopoulos"},{"affiliations":["Northwestern University, Evanston, United States"],"email":"erik.nisbet@northwestern.edu","is_corresponding":false,"name":"Erik Nisbet"},{"affiliations":["Northwestern University, Chicago, United States"],"email":"matthew.kay@gmail.com","is_corresponding":false,"name":"Matthew Kay"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1488","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"The Backstory to \u201cSwaying the Public\u201d: A Design Chronicle of Election Forecast Visualizations","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1489","abstract":"Projecting high-dimensional vectors into two dimensions for visualization, known as embedding visualization, facilitates perceptual reasoning and interpretation. Comparison of multiple embedding visualizations drives decision-making in many domains, but conventional comparison methods are limited by a reliance on direct point correspondences. This requirement precludes embedding comparisons without point correspondences, such as two different datasets of annotated images, and fails to capture meaningful higher-level relationships among point groups. To address these shortcomings, we propose a general framework to compare embedding visualizations based on shared class labels rather than individual points. Our approach partitions points into regions corresponding to three key class concepts---confusion, neighborhood, and relative size---to characterize intra- and inter-class relationships. Informed by a preliminary user study, we realize an implementation of our framework using perceptual neighborhood graphs to define these regions and introduce metrics to quantify each concept. 
We demonstrate the generality of our framework with use cases from machine learning and single-cell biology, highlighting our metrics' ability to surface insightful comparisons across label hierarchies. To assess the effectiveness of our approach, we conducted a user study with five machine learning researchers and six single-cell biologists using an interactive and scalable prototype developed in Python and Rust. Our metrics enabled more structured comparison through visual guidance and increased participants\u2019 confidence in their findings.","authors":[{"affiliations":["Harvard Medical School, Boston, United States"],"email":"trevor_manz@g.harvard.edu","is_corresponding":true,"name":"Trevor Manz"},{"affiliations":["Ozette Technologies, Seattle, United States"],"email":"f.lekschas@gmail.com","is_corresponding":false,"name":"Fritz Lekschas"},{"affiliations":["Ozette Technologies, Seattle, United States"],"email":"palmergreene@gmail.com","is_corresponding":false,"name":"Evan Greene"},{"affiliations":["Ozette Technologies, Seattle, United States"],"email":"greg@ozette.com","is_corresponding":false,"name":"Greg Finak"},{"affiliations":["Harvard Medical School, Boston, United States"],"email":"nils@hms.harvard.edu","is_corresponding":false,"name":"Nils Gehlenborg"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1489","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"A General Framework for Comparing Embedding Visualizations Across Class-Label Hierarchies","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1494","abstract":"Topological abstractions offer a method to summarize the behavior of vector fields, but computing them robustly can be challenging due to numerical precision issues. One alternative is to represent the vector field using a discrete approach, which constructs a collection of pairs of simplices in the input mesh that satisfies criteria introduced by Forman\u2019s discrete Morse theory. While numerous approaches exist to compute pairs in the restricted case of the gradient of a scalar field, state-of-the-art algorithms for the general case of vector fields require expensive optimization procedures. This paper introduces a fast, novel approach for pairing simplices of two-dimensional, triangulated vector fields that do not vary in time. The key insight of our approach is that we can employ a local evaluation, inspired by the approach used to construct a discrete gradient field, where every cell in a mesh is considered by no more than one of its vertices. Specifically, we observe that for any edge in the input mesh, we can uniquely assign an outward direction of flow. We can further expand this consistent notion of outward flow at each vertex, which corresponds to the concept of a downhill flow in the case of scalar fields.
Working with outward flow enables a linear-time algorithm that processes the (outward) neighborhoods of each vertex one-by-one, similar to the approach used for scalar fields. We couple our approach to constructing discrete vector fields with a method to extract, simplify, and visualize topological features. Empirical results on analytic and simulation data demonstrate drastic improvements in running time, produce features similar to the current state-of-the-art, and show the application of simplification to large, complex flows.","authors":[{"affiliations":["University of Arizona, Tucson, United States"],"email":"finkent@arizona.edu","is_corresponding":true,"name":"Tanner Finken"},{"affiliations":["Sorbonne Universit\u00e9, Paris, France"],"email":"julien.tierny@sorbonne-universite.fr","is_corresponding":false,"name":"Julien Tierny"},{"affiliations":["University of Arizona, Tucson, United States"],"email":"josh@cs.arizona.edu","is_corresponding":false,"name":"Joshua A Levine"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1494","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Localized Evaluation for Constructing Discrete Vector Fields","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1500","abstract":"Haptic feedback provides an essential sensory stimulus crucial for interacting and analyzing three-dimensional spatio-temporal phenomena on surface visualizations. Given its ability to provide enhanced spatial perception and scene maneuverability, virtual reality (VR) catalyzes haptic interactions on surface visualizations. Various interaction modes, encompassing both mid-air and on-surface interactions---with or without the application of assisting force stimuli---have been explored using haptic force feedback devices. In this paper, we evaluate the use of on-surface and assisted on-surface haptic modes of interaction compared to a no-haptic interaction mode. A force-based haptic stylus is used for all three modalities; the on-surface mode uses collision based forces, whereas the assisted on-surface mode is accompanied by an additional snapping force. We conducted a within-subjects user study involving fundamental interaction tasks performed on surface visualizations. Keeping a consistent visual design across all three modes, our study incorporates tasks that require the localization of the highest, lowest, and random points on surfaces; and tasks that focus on brushing curves on surfaces with varying complexity and occlusion levels. Our findings show that participants took almost the same time to brush curves using all the interaction modes. They could draw smoother curves using the on-surface interaction modes compared to the no-haptic mode. However, the assisted on-surface mode provided better accuracy than the on-surface mode. 
The on-surface mode was slower in point localization, but the accuracy depended on the visual cues and occlusions associated with the tasks. Finally, we discuss participant feedback on using haptic force feedback as a tangible input modality and share takeaways to aid the design of haptics-based tangible interactions for surface visualizations.","authors":[{"affiliations":["University of Calgary, Calgary, Canada"],"email":"hamza.afzaal@ucalgary.ca","is_corresponding":true,"name":"Hamza Afzaal"},{"affiliations":["University of Calgary, Calgary, Canada"],"email":"ualim@ucalgary.ca","is_corresponding":false,"name":"Usman Alim"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1500","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Evaluating Force-based Haptics for Immersive Tangible Interactions with Surface Visualizations","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1502","abstract":"Visualization is widely used for exploring personal data, but many visualization authoring systems do not support expressing data in flexible, personal, and organic layouts. Sketching is an accessible tool for experimenting with visualization designs, but formalizing sketched elements into structured data representations is difficult, as modifying hand-drawn glyphs to encode data when available is labour-intensive and error prone. We propose an approach where authors structure their own expressive templates, capturing implicit style as well as explicit data mappings, through sketching a representative visualization for an envisioned or partial dataset. Our approach seeks to support freeform exploration and partial specification, balanced against interactive machine support for specifying the generative procedural rules. We implement this approach in DataGarden, a system designed to support hierarchical data visualizations, and evaluate it with 12 participants in a reproduction study and four experts in a freeform creative task. Participants readily picked up the core idea of template authoring, and the variety of workflows we observed highlight how this process serves design and data ideation as well as visual constraint iteration. 
We discuss challenges in implementing the design considerations underpinning DataGarden, and illustrate its potential in a gallery of visualizations generated from authored templates.","authors":[{"affiliations":["Universit\u00e9 Paris-Saclay, Orsay, France"],"email":"anna.offenwanger@gmail.com","is_corresponding":true,"name":"Anna Offenwanger"},{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Inria, LISN, Orsay, France"],"email":"theophanis.tsandilas@inria.fr","is_corresponding":false,"name":"Theophanis Tsandilas"},{"affiliations":["University of Toronto, Toronto, Canada"],"email":"fanny@dgp.toronto.edu","is_corresponding":false,"name":"Fanny Chevalier"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1502","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"DataGarden: Formalizing Personal Sketches into Structured Visualization Templates","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1503","abstract":"The increasing reliance on Large Language Models (LLMs) for health information seeking can pose severe risks due to the potential for misinformation and the complexity of these topics. This paper introduces KnowNet, a visualization system that integrates LLMs with Knowledge Graphs (KG) to provide enhanced accuracy and structured exploration. One core idea in KnowNet is to conceptualize the understanding of a subject as the gradual construction of graph visualization, aligning the user's cognitive process with both the structured data in KGs and the unstructured outputs from LLMs. Specifically, we extracted triples (e.g., entities and their relations) from LLM outputs and mapped them into the validated information and supported evidence in external KGs. Based on the neighborhood of the currently explored entities in KGs, KnowNet provides recommendations for further inquiry, aiming to guide a comprehensive understanding without overlooking critical aspects. A progressive graph visualization is proposed to show the alignment between LLMs and KGs, track previous inquiries, and connect this history with current queries and next-step recommendations. 
We demonstrate the effectiveness of our system via use cases and expert interviews.","authors":[{"affiliations":["University of Minnesota, Minneapolis, United States"],"email":"yan00111@umn.edu","is_corresponding":false,"name":"Youfu Yan"},{"affiliations":["University of Minnesota, Minneapolis, United States"],"email":"hou00127@umn.edu","is_corresponding":false,"name":"Yu Hou"},{"affiliations":["University of Minnesota, Minneapolis, United States"],"email":"xiao0290@umn.edu","is_corresponding":false,"name":"Yongkang Xiao"},{"affiliations":["University of Minnesota, Minneapolis, United States"],"email":"zhan1386@umn.edu","is_corresponding":false,"name":"Rui Zhang"},{"affiliations":["University of Minnesota, Minneapolis , United States"],"email":"qianwen@umn.edu","is_corresponding":true,"name":"Qianwen Wang"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1503","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Guided Health-related Information Seeking from LLMs via Knowledge Graph Integration","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1504","abstract":"A wide range of visualization authoring interfaces enable the creation of highly customized visualizations. However, prioritizing expressiveness often impedes the learnability of the authoring interface. The diversity of users, such as varying computational skills and prior experiences in user interfaces, makes it even more challenging for a single authoring interface to satisfy the needs of a broad audience. In this paper, we introduce a framework to balance learnability and expressivity in a visualization authoring system. Adopting insights from learnability studies, such as multimodal interaction and visualization literacy, we explore the design space of blending multiple visualization authoring interfaces for supporting authoring tasks in a complementary and flexible manner. To evaluate the effectiveness of blending interfaces, we implemented a proof-of-concept system, Blace, that combines four common visualization authoring interfaces\u2014template-based, shelf configuration, natural language, and code editor\u2014that are tightly linked to one another to help users easily relate unfamiliar interfaces to more familiar ones. Using the system, we conducted a user study with 12 domain experts who regularly visualize genomics data as part of their analysis workflow. Participants with varied visualization and programming backgrounds were able to successfully reproduce complex visualization examples without a guided tutorial in the study. Feedback from a post-study qualitative questionnaire further suggests that blending interfaces enabled participants to learn the system easily and assisted them in confidently editing unfamiliar visualization grammar in the code editor, enabling expressive customization. 
Reflecting on our study results and the design of our system, we discuss the different interaction patterns that we identified and design implications for blending visualization authoring interfaces.","authors":[{"affiliations":["Harvard Medical School, Boston, United States"],"email":"sehi_lyi@hms.harvard.edu","is_corresponding":true,"name":"Sehi L'Yi"},{"affiliations":["Eindhoven University of Technology, Eindhoven, Netherlands"],"email":"a.v.d.brandt@tue.nl","is_corresponding":false,"name":"Astrid van den Brandt"},{"affiliations":["Harvard Medical School, Boston, United States"],"email":"etowah_adams@hms.harvard.edu","is_corresponding":false,"name":"Etowah Adams"},{"affiliations":["Harvard Medical School, Boston, United States"],"email":"huyen_nguyen@hms.harvard.edu","is_corresponding":false,"name":"Huyen N. Nguyen"},{"affiliations":["Harvard Medical School, Boston, United States"],"email":"nils@hms.harvard.edu","is_corresponding":false,"name":"Nils Gehlenborg"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1504","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Learnable and Expressive Visualization Authoring Through Blended Interfaces","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1522","abstract":"Despite the recent surge of research efforts to make data visualizations accessible to people who are blind or have low-vision (BLV), how to support BLV people's data analysis remains an important and challenging question. As refreshable tactile displays (RTDs) become cheaper and conversational agents continue to improve, their combination provides a promising approach to support BLV people's interactive data analysis. To understand how BLV people would use and react to a system combining an RTD with a conversational agent, we conducted a Wizard-of-Oz study with 11 BLV participants involving line graphs, bar charts, and isarithmic maps. From an analysis of participant interactions, we identified nine distinct patterns and learned that the choice of modalities depended on the type of task and prior experience with tactile graphics. We also found that participants strongly preferred the combination of RTD and speech to a single modality, and that participants with more tactile experience described how tactile images facilitated deeper engagement with the data and supported independent interpretation. 
Our findings will inform the design of interfaces for such interactive mixed-modality systems.","authors":[{"affiliations":["Monash University, Melbourne, Australia"],"email":"samuel.reinders@monash.edu","is_corresponding":true,"name":"Samuel Reinders"},{"affiliations":["Monash University, Melbourne, Australia"],"email":"matthew.butler@monash.edu","is_corresponding":false,"name":"Matthew Butler"},{"affiliations":["Monash University, Clayton, Australia"],"email":"ingrid.zukerman@monash.edu","is_corresponding":false,"name":"Ingrid Zukerman"},{"affiliations":["Yonsei University, Seoul, Korea, Republic of","Microsoft Research, Redmond, United States"],"email":"b.lee@yonsei.ac.kr","is_corresponding":false,"name":"Bongshin Lee"},{"affiliations":["Monash University, Melbourne, Australia"],"email":"lizhen.qu@monash.edu","is_corresponding":false,"name":"Lizhen Qu"},{"affiliations":["Monash University, Melbourne, Australia"],"email":"kim.marriott@monash.edu","is_corresponding":false,"name":"Kim Marriott"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1522","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"When Refreshable Tactile Displays Meet Conversational Agents: Investigating Accessible Data Presentation and Analysis with Touch and Speech","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1533","abstract":"We introduce DiffFit, a differentiable algorithm for fitting protein atomistic structures into an experimentally reconstructed Cryo-Electron Microscopy (cryo-EM) volume map. This process is essential in structural biology to semi-automatically reconstruct large meso-scale models of complex protein assemblies and complete cellular structures that are based on measured cryo-EM data. Current approaches require manual fitting in 3D that already results in approximately aligned structures followed by an automated fine-tuning of the alignment. With our DiffFit approach, we enable domain scientists to automatically fit new structures and visualize the fitting results for inspection and interactive revision. Our fitting begins with differentiable 3D rigid transformations of the protein atom coordinates, followed by sampling the density values at its atom coordinates from the target cryo-EM volume. To ensure a meaningful correlation between the sampled densities and the protein structure, we propose a novel loss function based on a multi-resolution volume-array approach and the exploitation of the negative space. Such a loss function serves as a critical metric for assessing the fitting quality, ensuring both fitting accuracy and improved visualization of the results. We assessed the placement quality of DiffFit with several large, realistic datasets and found its quality to be superior to that of previous methods. We further evaluated our method in two use cases.
First, we demonstrate its use in the process of automating the integration of known composite structures into larger protein complexes. Second, we show that it facilitates the fitting of predicted protein domains into volume densities to aid researchers in the identification of unknown proteins. We implemented our algorithm as an open-source plugin (github.com/nanovis/DiffFitViewer) in ChimeraX, a leading visualization software in the field. All supplemental materials are available at osf.io/5tx4q.","authors":[{"affiliations":["King Abdullah University of Science and Technology, Thuwal, Saudi Arabia"],"email":"deng.luo@kaust.edu.sa","is_corresponding":true,"name":"Deng Luo"},{"affiliations":["King Abdullah University of Science and Technology, Thuwal, Saudi Arabia"],"email":"zainab.alsuwaykit@kaust.edu.sa","is_corresponding":false,"name":"Zainab Alsuwaykit"},{"affiliations":["King Abdullah University of Science and Technology, Thuwal, Saudi Arabia"],"email":"dawar.khan@kaust.edu.sa","is_corresponding":false,"name":"Dawar Khan"},{"affiliations":["King Abdullah University of Science and Technology, Thuwal, Saudi Arabia"],"email":"ondrej.strnad@kaust.edu.sa","is_corresponding":false,"name":"Ond\u0159ej Strnad"},{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Orsay, France","Inria, Saclay, France"],"email":"tobias.isenberg@gmail.com","is_corresponding":false,"name":"Tobias Isenberg"},{"affiliations":["King Abdullah University of Science and Technology, Thuwal, Saudi Arabia"],"email":"ivan.viola@kaust.edu.sa","is_corresponding":false,"name":"Ivan Viola"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1533","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"DiffFit: Visually-Guided Differentiable Fitting of Molecule Structures to a Cryo-EM Map","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1544","abstract":"Large Language Models (LLMs) have been successfully adopted for a variety of visualization tasks, but how far are we from perceptually aware LLMs that can predict human takeaways from visualizations? Graphical perception literature has shown that human chart takeaways are sensitive to visualization design choices, such as the spatial arrangement. In this work, we examine how well LLMs can predict such design choice sensitivity when generating takeaways, using bar charts with varying spatial layouts as a case study. We test four common chart arrangements: vertically juxtaposed, horizontally juxtaposed, overlaid, and stacked, through three experimental phases. In Phase 1, we identified the optimal configuration of LLMs to generate meaningful chart takeaways, across four LLM models (GPT3.5, GPT4, GPT4V, and Gemini 1.0 Pro), two temperature settings (0, 0.7), four chart specifications (Vega-Lite, Matplotlib, ggplot2, and scene graphs), and several prompting strategies.
We found that even state-of-the-art LLMs can struggle to generate factually accurate takeaways. In Phase 2, using the optimal LLM configuration, we generated 30 chart takeaways across the four arrangements of bar charts using two datasets, with both zero-shot and one-shot settings. Compared to data on human takeaways from prior work, we found that the takeaways LLMs generate often do not align with human comparisons. In Phase 3, we examined the effect of the charts\u2019 underlying data values on takeaway alignment between humans and LLMs, and found both matches and mismatches. Overall, our work evaluates the ability of LLMs to emulate human interpretations of data and points to challenges and opportunities in using LLMs to predict human-aligned chart takeaways.","authors":[{"affiliations":["University of Washington, Seattle, United States"],"email":"wwill@cs.washington.edu","is_corresponding":true,"name":"Huichen Will Wang"},{"affiliations":["Adobe Research, Seattle, United States"],"email":"jhoffs@adobe.com","is_corresponding":false,"name":"Jane Hoffswell"},{"affiliations":["University of Massachusetts Amherst, Amherst, United States"],"email":"yukithane@gmail.com","is_corresponding":false,"name":"Sao Myat Thazin Thane"},{"affiliations":["Adobe Research, San Jose, United States"],"email":"victorbursztyn2022@u.northwestern.edu","is_corresponding":false,"name":"Victor S. Bursztyn"},{"affiliations":["Georgia Tech, Atlanta, United States"],"email":"cxiong@gatech.edu","is_corresponding":false,"name":"Cindy Xiong Bearfield"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1544","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"How Aligned are Human Chart Takeaways and LLM Predictions? A Case Study on Bar Charts with Varying Layouts","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1547","abstract":"Visual validation of regression models in scatterplots is a common practice for assessing model quality, yet its efficacy remains unquantified. We conducted two empirical experiments to investigate individuals' ability to visually validate linear regression models (linear trends) and to examine the impact of common visualization designs on validation quality. The first experiment showed that the level of accuracy for visual estimation of slope (i.e., fitting a line to data) is higher than for visual validation of slope (i.e., accepting a shown line). Notably, we found bias toward slopes that are \u201ctoo steep\u201d in both cases. This led to novel insights that participants naturally assessed regression with orthogonal distances between the points and the line (i.e., ODR regression) rather than the common vertical distances (OLS regression). In the second experiment, we investigated whether incorporating common designs for regression visualization (error lines, bounding boxes, and confidence intervals) would improve visual validation.
Even though error lines reduced validation bias, results failed to show the desired improvements in accuracy for any design. Overall, our findings suggest caution in using visual model validation for linear trends in scatterplots.","authors":[{"affiliations":["University of Cologne, Cologne, Germany"],"email":"braun@cs.uni-koeln.de","is_corresponding":true,"name":"Daniel Braun"},{"affiliations":["Tufts University, Medford, United States"],"email":"remco@cs.tufts.edu","is_corresponding":false,"name":"Remco Chang"},{"affiliations":["University of Wisconsin - Madison, Madison, United States"],"email":"gleicher@cs.wisc.edu","is_corresponding":false,"name":"Michael Gleicher"},{"affiliations":["University of Cologne, Cologne, Germany"],"email":"landesberger@cs.uni-koeln.de","is_corresponding":false,"name":"Tatiana von Landesberger"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1547","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Beware of Validation by Eye: Visual Validation of Linear Trends in Scatterplots","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1568","abstract":"Dimensionality reduction techniques are widely used for visualizing high-dimensional data. However, support for interpreting patterns in dimensionality reduction results in the context of the original data space is often insufficient. Consequently, users may struggle to extract insights from the projections. In this paper we introduce DimBridge, a visual analytics tool that allows users to interact with visual patterns in a projection and retrieve corresponding data patterns. DimBridge supports several interactions, allowing users to perform various analyses, from contrasting multiple clusters to explaining complex latent structures. Leveraging first-order predicate logic, DimBridge identifies subspaces in the original dimensions relevant to a queried pattern and provides an interface for users to visualize and interact with them. We demonstrate how DimBridge can help users overcome the challenges associated with interpreting visual patterns in projections.","authors":[{"affiliations":["Tufts University, Medford, United States"],"email":"brianmontambault@gmail.com","is_corresponding":true,"name":"Brian Montambault"},{"affiliations":["Tufts University, Medford, United States"],"email":"gabriel.appleby@tufts.edu","is_corresponding":false,"name":"Gabriel Appleby"},{"affiliations":["Tufts University, Boston, United States"],"email":"jen@cs.tufts.edu","is_corresponding":false,"name":"Jen Rogers"},{"affiliations":["Tufts University, Medford, United States"],"email":"camelia_daniela.brumar@tufts.edu","is_corresponding":false,"name":"Camelia D. 
Brumar"},{"affiliations":["Vanderbilt University, Nashville, United States"],"email":"mingwei.li@tufts.edu","is_corresponding":false,"name":"Mingwei Li"},{"affiliations":["Tufts University, Medford, United States"],"email":"remco@cs.tufts.edu","is_corresponding":false,"name":"Remco Chang"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1568","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"DimBridge: Interactive Explanation of Visual Patterns in Dimensionality Reductions with Predicate Logic","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1571","abstract":"Effective security patrol management is critical for ensuring safety in diverse environments such as art galleries, airports, and factories. The behavior of patrols in these situations can be modeled by patrolling games. They simulate the behavior of the patrol and adversary in the building, which is modeled as a graph of interconnected nodes representing rooms. The designers of algorithms solving the game face the problem of analyzing complex graph layouts with temporal dependencies. Therefore, appropriate visual support is crucial for them to work effectively. In this paper, we present a novel tool that helps the designers of patrolling games explore the outcomes of the proposed algorithms and approaches, evaluate their success rate, and propose modifications that can improve their solutions. Our tool offers an intuitive and interactive interface, featuring a detailed exploration of patrol routes and probabilities of taking them, simulation of patrols, and other requested features. In close collaboration with experts in designing patrolling games, we conducted three case studies demonstrating the usage and usefulness of our tool. 
The prototype of the tool, along with exemplary datasets, is available at https://gitlab.fi.muni.cz/formela/strategy-vizualizer.","authors":[{"affiliations":["Masaryk University, Brno, Czech Republic"],"email":"langm@mail.muni.cz","is_corresponding":true,"name":"Mat\u011bj Lang"},{"affiliations":["Masaryk University, Brno, Czech Republic"],"email":"469242@mail.muni.cz","is_corresponding":false,"name":"Adam \u0160t\u011bp\u00e1nek"},{"affiliations":["Faculty of Informatics, Masaryk University, Brno, Czech Republic"],"email":"514179@mail.muni.cz","is_corresponding":false,"name":"R\u00f3bert Zvara"},{"affiliations":["Faculty of Informatics, Masaryk University, Brno, Czech Republic"],"email":"rehak@fi.muni.cz","is_corresponding":false,"name":"Vojt\u011bch \u0158eh\u00e1k"},{"affiliations":["Masaryk University, Brno, Czech Republic"],"email":"kozlikova@fi.muni.cz","is_corresponding":false,"name":"Barbora Kozlikova"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1571","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Who Let the Guards Out: Visual Support for Patrolling Games","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1574","abstract":"The numerical extraction of vortex cores from time-dependent fluid flow attracted much attention over the past decades. A commonly agreed upon vortex definition remained elusive since a proper vortex core needs to satisfy two hard constraints: it must be objective and Lagrangian. Recent methods on objectivization met the first but not the second constraint, since there was no formal guarantee that the resulting vortex coreline is indeed a pathline of the fluid flow. In this paper, we propose the first vortex core definition that is both objective and Lagrangian. Our approach restricts observer motions to follow along pathlines, which reduces the degrees of freedoms: we only need to optimize for an observer rotation that makes the observed flow as steady as possible. This optimization succeeds along Lagrangian vortex corelines and will result in a non-zero time-partial everywhere else. By performing this optimization at each point of a spatial grid, we obtain a residual scalar field, which we call vortex deviation error. The local minima on the grid serve as seed points for a gradient descent optimization that delivers sub-voxel accurate corelines. The visualization of both 2D and 3D vortex cores is based on the separation of the movement of the vortex core and the swirling flow behavior around it. While the vortex core is represented by a pathline, the swirling motion around it is visualized by streamlines in the correct frame. 
We demonstrate the utility of the approach on several 2D and 3D time-dependent vector fields.","authors":[{"affiliations":["Friedrich-Alexander-University Erlangen-N\u00fcrnberg, Erlangen, Germany"],"email":"tobias.guenther@fau.de","is_corresponding":true,"name":"Tobias G\u00fcnther"},{"affiliations":["University of Magdeburg, Magdeburg, Germany"],"email":"theisel@ovgu.de","is_corresponding":false,"name":"Holger Theisel"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1574","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Objective Lagrangian Vortex Cores and their Visual Representations","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1594","abstract":"The visualization community has a rich history of reflecting upon visualization design flaws. Although research in this area has remained lively, we believe it is essential to continuously revisit this classic and critical topic in visualization research by incorporating more empirical evidence from diverse sources, characterizing new design flaws, building more systematic theoretical frameworks, and understanding the underlying reasons for these flaws. To address the above gaps, this work investigated visualization design flaws through the lens of the public, constructed a framework to summarize and categorize the identified flaws, and explored why these flaws occur. Specifically, we analyzed 2227 flawed data visualizations collected from an online gallery and derived a design task-associated taxonomy containing 76 specific design flaws. These flaws were further classified into three high-level categories (i.e., misinformation, uninformativeness, unsociability) and ten subcategories (e.g., inaccuracy, unfairness, ambiguity). Next, we organized five focus groups to explore why these design flaws occur and identified seven causes of the flaws. 
Finally, we propose a research agenda for combating visualization design flaws and summarize nine research opportunities.","authors":[{"affiliations":["Fudan University, Shanghai, China","Fudan University, Shanghai, China"],"email":"xingyulan96@gmail.com","is_corresponding":true,"name":"Xingyu Lan"},{"affiliations":["University of Edinburgh, Edinburgh, United Kingdom","University of Edinburgh, Edinburgh, United Kingdom"],"email":"coraline.liu.dataviz@gmail.com","is_corresponding":false,"name":"Yu Liu"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1594","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"I Came Across a Junk: Understanding Design Flaws of Data Visualization from the Public's Perspective","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1595","abstract":"Assigning discriminable and harmonic colors to samples according to their class labels and spatial distribution can generate attractive visualizations and facilitate data exploration. However, as the number of classes increases, it is challenging to generate a high-quality color assignment result that accommodates all classes simultaneously. A practical solution is to organize classes into a hierarchy and then dynamically assign colors during exploration. However, existing color assignment methods fall short in generating high-quality color assignment results and dynamically aligning them with hierarchical structures. To address this issue, we develop a dynamic color assignment method for hierarchical data, which is formulated as a multi-objective optimization problem. This method simultaneously considers color discriminability, color harmony, and spatial distribution at each hierarchical level. By using the colors of parent classes to guide the color assignment of their child classes, our method further promotes both consistency and clarity across hierarchical levels. 
We demonstrate the effectiveness of our method in generating dynamic color assignment results with quantitative experiments and a user study.","authors":[{"affiliations":["Tsinghua University, Beijing, China"],"email":"jiashu0717c@gmail.com","is_corresponding":true,"name":"Jiashu Chen"},{"affiliations":["Tsinghua University, Beijing, China"],"email":"vicayang496@gmail.com","is_corresponding":false,"name":"Weikai Yang"},{"affiliations":["Tsinghua University, Beijing, China"],"email":"jiazl22@mails.tsinghua.edu.cn","is_corresponding":false,"name":"Zelin Jia"},{"affiliations":["Tsinghua University, Beijing, China"],"email":"tarolancy@gmail.com","is_corresponding":false,"name":"Lanxi Xiao"},{"affiliations":["Tsinghua University, Beijing, China"],"email":"shixia@tsinghua.edu.cn","is_corresponding":false,"name":"Shixia Liu"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1595","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Dynamic Color Assignment for Hierarchical Data","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1597","abstract":"In understanding and redesigning the function of proteins in modern biochemistry, protein engineers are increasingly focusing on exploring regions in proteins called loops. Analyzing various characteristics of these regions helps the experts design the transfer of the desired function from one protein to another. This process is denoted as loop grafting. We designed a set of interactive visualizations that provide experts with visual support through all the loop grafting pipeline steps. The workflow is divided into several phases, reflecting the steps of the pipeline. Each phase is supported by a specific set of abstracted 2D visual representations of proteins and their loops that are interactively linked with the 3D View of proteins. By sequentially passing through the individual phases, the user shapes the list of loops that are potential candidates for loop grafting. Finally, the actual in-silico insertion of the loop candidates from one protein to the other is performed, and the results are visually presented to the user. In this way, the fully computational rational design of proteins and their loops results in newly designed protein structures that can be further assembled and tested through in-vitro experiments. We showcase the contribution of our visual support design in a real case scenario: changing the enantiomer selectivity of an engineered enzyme. Moreover, we provide the readers with the experts' feedback.","authors":[{"affiliations":["Masaryk University, Brno, Czech Republic"],"email":"kiraa@mail.muni.cz","is_corresponding":false,"name":"Filip Op\u00e1len\u00fd"},{"affiliations":["Masaryk University, Brno, Czech Republic"],"email":"paloulbrich@gmail.com","is_corresponding":false,"name":"Pavol Ulbrich"},{"affiliations":["Masaryk University, Brno, Czech Republic","St. 
Anne\u2019s University Hospital, Brno, Czech Republic"],"email":"joan.planas@mail.muni.cz","is_corresponding":false,"name":"Joan Planas-Iglesias"},{"affiliations":["Masaryk University, Brno, Czech Republic","University of Bergen, Bergen, Norway"],"email":"xbyska@fi.muni.cz","is_corresponding":false,"name":"Jan By\u0161ka"},{"affiliations":["Masaryk University, Brno, Czech Republic","St. Anne\u2019s University Hospital, Brno, Czech Republic"],"email":"stourac.jan@gmail.com","is_corresponding":false,"name":"Jan \u0160toura\u010d"},{"affiliations":["Faculty of Science, Masaryk University, Brno, Czech Republic","St. Anne\u2019s University Hospital Brno, Brno, Czech Republic"],"email":"222755@mail.muni.cz","is_corresponding":false,"name":"David Bedn\u00e1\u0159"},{"affiliations":["Masaryk University, Brno, Czech Republic"],"email":"katarina.furmanova@gmail.com","is_corresponding":true,"name":"Katar\u00edna Furmanov\u00e1"},{"affiliations":["Masaryk University, Brno, Czech Republic"],"email":"kozlikova@fi.muni.cz","is_corresponding":false,"name":"Barbora Kozlikova"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1597","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Visual Support for the Loop Grafting Workflow on Proteins","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1599","abstract":"Existing deep learning-based surrogate models facilitate efficient data generation, but fall short in uncertainty quantification, efficient parameter space exploration, and reverse prediction. In our work, we introduce SurroFlow, a novel normalizing flow-based surrogate model, to learn the invertible transformation between simulation parameters and simulation outputs. The model not only allows accurate predictions of simulation outcomes for a given simulation parameter but also supports uncertainty quantification in the data generation process. Additionally, it enables efficient simulation parameter recommendation and exploration. We integrate SurroFlow and a genetic algorithm as the backend of a visual interface to support effective user-guided ensemble simulation exploration and visualization. 
Our framework significantly reduces the computational costs while enhancing the reliability and exploration capabilities of scientific surrogate models.","authors":[{"affiliations":["The Ohio State University, Columbus, United States","The Ohio State University, Columbus, United States"],"email":"shen.1250@osu.edu","is_corresponding":true,"name":"JINGYI SHEN"},{"affiliations":["The Ohio State University, Columbus, United States","The Ohio State University, Columbus, United States"],"email":"duan.418@osu.edu","is_corresponding":false,"name":"Yuhan Duan"},{"affiliations":["The Ohio State University, Columbus, United States","The Ohio State University, Columbus, United States"],"email":"hwshen@cse.ohio-state.edu","is_corresponding":false,"name":"Han-Wei Shen"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1599","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"SurroFlow: A Flow-Based Surrogate Model for Parameter Space Exploration and Uncertainty Quantification","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1603","abstract":"Multi-modal embeddings form the foundation for vision-language models, such as CLIP embeddings, the most widely used text-image embeddings. However, these embeddings are hard to interpret and vulnerable to subtle misalignment of cross-modal features, resulting in decreased model performance and diminished generalization. To address this problem, we design ModalChorus, an interactive system for visual probing and alignment of multi-modal embeddings. ModalChorus primarily offers a two-stage process: 1) embedding probing with Modal Fusion Map (MFM), a novel parametric dimensionality reduction method that integrates both metric and nonmetric objectives to enhance modality fusion; and 2) embedding alignment that allows users to interactively articulate intentions for both point-set and set-set alignments. Quantitative and qualitative comparisons for CLIP embeddings with existing dimensionality reduction (e.g., t-SNE and MDS) and data fusion (e.g., data context map) methods demonstrate the advantages of MFM in showcasing cross-modal features over common vision-language datasets. 
Case studies reveal that ModalChorus can facilitate intuitive discovery of misalignment and efficient re-alignment in scenarios ranging from zero-shot classification to cross-modal retrieval and generation.","authors":[{"affiliations":["The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China"],"email":"yyebd@connect.ust.hk","is_corresponding":true,"name":"Yilin Ye"},{"affiliations":["The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China"],"email":"sxiao713@connect.hkust-gz.edu.cn","is_corresponding":false,"name":"Shishi Xiao"},{"affiliations":["The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China"],"email":"xingchen.zeng@outlook.com","is_corresponding":false,"name":"Xingchen Zeng"},{"affiliations":["The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China","The Hong Kong University of Science and Technology, Hong Kong SAR, China"],"email":"weizeng@hkust-gz.edu.cn","is_corresponding":false,"name":"Wei Zeng"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1603","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"ModalChorus: Visual Probing and Alignment of Multi-modal Embeddings via Modal Fusion Map","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1606","abstract":"With the increase in graph size, it becomes difficult or even impossible to visualize graph structures clearly within the limited screen space. Consequently, it is crucial to design effective visual representations for large graphs. In this paper, we propose AdaMotif, a novel approach that can capture the essential structure patterns of large graphs and effectively reveal the overall structures via adaptive motif designs. Specifically, our approach involves partitioning a given large graph into multiple subgraphs, then clustering similar subgraphs and extracting similar structural information within each cluster. Subsequently, adaptive motifs representing each cluster are generated and utilized to replace the corresponding subgraphs, leading to a simplified visualization. Our approach aims to preserve as much information from the subgraphs as possible, effectively simplifying graphs while minimizing information loss. Notably, our approach successfully visualizes crucial community information within a large graph. We conduct case studies and a user study using both synthetic and real-world graphs to validate the effectiveness of our proposed approach. 
The results demonstrate the capability of our approach in simplifying graphs while retaining important structural and community information.","authors":[{"affiliations":["Shenzhen University, Shenzhen, China"],"email":"hzhou@szu.edu.cn","is_corresponding":true,"name":"Hong Zhou"},{"affiliations":["Shenzhen University, Shenzhen, China"],"email":"laipeifeng1111@gmail.com","is_corresponding":false,"name":"Peifeng Lai"},{"affiliations":["Shenzhen University, Shenzhen, China"],"email":"zhida.sun@connect.ust.hk","is_corresponding":false,"name":"Zhida Sun"},{"affiliations":["Shenzhen University, Shenzhen, China"],"email":"2310274034@email.szu.edu.cn","is_corresponding":false,"name":"Xiangyuan Chen"},{"affiliations":["Shenzhen University, Shen Zhen, China"],"email":"275621136@qq.com","is_corresponding":false,"name":"Yang Chen"},{"affiliations":["Shenzhen University, Shenzhen, China"],"email":"hswu@szu.edu.cn","is_corresponding":false,"name":"Huisi Wu"},{"affiliations":["Nanyang Technological University, Singapore, Singapore"],"email":"yong-wang@ntu.edu.sg","is_corresponding":false,"name":"Yong WANG"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1606","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"AdaMotif: Graph Simplification via Adaptive Motif Design","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1612","abstract":"Partitionings (or segmentations) divide a given domain into disjoint connected regions whose union forms again the entire domain. Multi-dimensional partitionings occur, for example, when analyzing parameter spaces of simulation models, where each segment of the partitioning represents a region of similar model behavior. Having computed a partitioning, one is commonly interested in understanding how large the segments are and which segments lie next to each other. While visual representations of 2D domain partitionings that reveal sizes and neighborhoods are straightforward, this is no longer the case when considering multi-dimensional domains of three or more dimensions. We propose an algorithm for computing 2D embeddings of multi-dimensional partitionings. The embedding shall have the following properties: It shall maintain the topology of the partitioning and optimize the area sizes and joint boundary lengths of the embedded segments to match the respective sizes and lengths in the multi-dimensional domain. We demonstrate the effectiveness of our approach by applying it to different use cases, including the visual exploration of 3D spatial domain segmentations and multi-dimensional parameter space partitionings of simulation ensembles. 
We numerically evaluate our algorithm with respect to how well sizes and lengths are preserved depending on the dimensionality of the domain and the number of segments.","authors":[{"affiliations":["University of M\u00fcnster, M\u00fcnster, Germany"],"email":"m_ever14@uni-muenster.de","is_corresponding":true,"name":"Marina Evers"},{"affiliations":["University of M\u00fcnster, M\u00fcnster, Germany"],"email":"linsen@uni-muenster.de","is_corresponding":false,"name":"Lars Linsen"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1612","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"2D Embeddings of Multi-dimensional Partitionings","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1613","abstract":"We present a path-based design model and system for designing and creating visualisations. Our model represents a systematic approach to constructing visual representations of data or concepts following a predefined sequence of steps. The initial step involves outlining the overall appearance of the visualisation by creating a skeleton structure, referred to as a flowpath. Subsequently, we specify objects, visual marks, properties, and appearance, storing them in a gene. Lastly, we map data onto the flowpath, ensuring suitable morphisms. Alternative designs are created by exchanging values in the gene. For example, designs that share similar traits are created by making small incremental changes to the gene. Our design method develops a wide variety of creative ideas, space-filling visualisations, and traditional designs (bar chart, pie chart, etc.). Our implementation demonstrates the model, and we apply the output visualisations to a smart-watch and to visualisation dashboards. In this article we (1) introduce, define and explain the path model and discuss possibilities for its use, (2) present our implementation, results, and evaluation, and (3) demonstrate and evaluate an application of its use on a mobile watch.","authors":[{"affiliations":["ExaDev, Gaerwen, United Kingdom","Bangor University, Bangor, United Kingdom"],"email":"james.ogge@gmail.com","is_corresponding":false,"name":"James R Jackson"},{"affiliations":["Bangor University, Bangor, United Kingdom"],"email":"p.ritsos@bangor.ac.uk","is_corresponding":false,"name":"Panagiotis D. Ritsos"},{"affiliations":["Bangor University, Bangor, United Kingdom"],"email":"p.butcher@bangor.ac.uk","is_corresponding":false,"name":"Peter W. S. 
Butcher"},{"affiliations":["Bangor University, Bangor, United Kingdom"],"email":"j.c.roberts@bangor.ac.uk","is_corresponding":true,"name":"Jonathan C Roberts"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1613","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Path-based Design Model for Constructing and Exploring Alternative Visualisations","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1615","abstract":"We present Cell2Cell, a novel visual analytics approach for quantifying and visualizing networks of cell-cell interactions in three-dimensional (3D) multi-channel cancerous tissue data. By analyzing cellular interactions, biomedical domain experts can gain a more accurate understanding of the intricate relationships between cancer and immune cells. Recent methods have focused on inferring interaction based on the proximity of cells in low-resolution 2D multi-channel imaging data. By contrast, we analyze cell interactions by quantifying the intensities of protein expressions extracted from high-resolution 3D multi-channel volume data. Such analyses have a strong exploratory nature and require a tight integration of domain experts in the analysis loop to leverage their deep knowledge. We propose two complementary semi-automated approaches to cope with the increasing size and complexity of the data in an interactive fashion: On the one hand, we interpret cell-to-cell interactions as edges in a cell graph and analyze the image signal (protein expressions) along those edges, using spatial as well as abstract data visualizations. Complementarily, we propose a cell-centered approach, enabling scientists to visually analyze polarized distributions of proteins in three dimensions, which also captures neighboring cells with biochemical and cell biological consequences. We evaluate our application in two case studies, where computational biologists and medical experts use Cell2Cell to investigate tumor micro-environments to identify and quantify T-cell activation in human tissue data. 
We confirmed that our tool can fully solve both use cases and enables a streamlined and detailed analysis of cell-cell interactions.","authors":[{"affiliations":["Harvard Medical School, Boston, United States"],"email":"eric.moerth@gmx.at","is_corresponding":true,"name":"Eric M\u00f6rth"},{"affiliations":["University of Vienna, Vienna, Austria"],"email":"kevin.sidak@univie.ac.at","is_corresponding":false,"name":"Kevin Sidak"},{"affiliations":["Harvard Medical School, Boston, United States"],"email":"zoltan_maliga@hms.harvard.edu","is_corresponding":false,"name":"Zoltan Maliga"},{"affiliations":["University of Vienna, Vienna, Austria"],"email":"torsten.moeller@univie.ac.at","is_corresponding":false,"name":"Torsten M\u00f6ller"},{"affiliations":["Harvard Medical School, Boston, United States"],"email":"nils@hms.harvard.edu","is_corresponding":false,"name":"Nils Gehlenborg"},{"affiliations":["Harvard University, Cambridge, United States"],"email":"peter_sorger@hms.harvard.edu","is_corresponding":false,"name":"Peter Sorger"},{"affiliations":["Harvard University, Cambridge, United States"],"email":"pfister@seas.harvard.edu","is_corresponding":false,"name":"Hanspeter Pfister"},{"affiliations":["Harvard University, Cambridge, United States"],"email":"jbeyer@g.harvard.edu","is_corresponding":false,"name":"Johanna Beyer"},{"affiliations":["New York University, New York, United States","Harvard University, Boston, United States"],"email":"rk4815@nyu.edu","is_corresponding":false,"name":"Robert Kr\u00fcger"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1615","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Cell2Cell: Explorative Cell Interaction Analysis in Multi-Volumetric Tissue Data","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1626","abstract":"We propose and study a novel cross-reality environment that seamlessly integrates a monoscopic 2D surface (an interactive screen with touch and pen input) with a stereoscopic 3D space (an augmented reality HMD) to jointly host spatial data visualizations. This innovative approach combines the best of two conventional methods of displaying and manipulating spatial 3D data, enabling users to fluidly explore diverse visual forms using tailored interaction techniques. Providing such effective 3D data exploration techniques is pivotal for conveying its intricate spatial structures---often at multiple spatial or semantic scales---across various application domains and requiring diverse visual representations for effective visualization. To understand user reactions to our new environment, we began with an elicitation user study, in which we captured their responses and interactions. We observed that users adapted their interaction approaches based on perceived visual representations, with natural transitions in spatial awareness and actions while navigating across the physical surface. 
Our findings then informed the development of a design space for spatial data exploration in cross-reality. We thus developed cross-reality environments tailored to three distinct domains: for 3D molecular structure data, for 3D point cloud data, and for 3D anatomical data. In particular, we designed interaction techniques that account for the inherent features of interactions in both spaces, facilitating various forms of interaction including mid-air gestures, touch interactions, pen interactions, and combinations thereof to enhance the users' sense of presence and engagement. We assessed the usability of our environment with biologists, focusing on its use for domain research. In addition, we evaluated our interaction transition designs with virtual and mixed-reality experts to gather further insights. As a result, we provide our design suggestions for the cross-reality environment, emphasizing the interaction with diverse visual representations and seamless interaction transitions between 2D and 3D spaces.","authors":[{"affiliations":["Xi'an Jiaotong-Liverpool University, Suzhou, China"],"email":"lixiang.zhao17@student.xjtlu.edu.cn","is_corresponding":false,"name":"Lixiang Zhao"},{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Orsay, France","Inria, Saclay, France"],"email":"tobias.isenberg@gmail.com","is_corresponding":false,"name":"Tobias Isenberg"},{"affiliations":["Xi'an Jiaotong-Liverpool University, Suzhou, China"],"email":"fuqi.xie20@student.xjtlu.edu.cn","is_corresponding":false,"name":"Fuqi Xie"},{"affiliations":["Xi'an Jiaotong-Liverpool University, Suzhou, China"],"email":"hainingliang@hkust-gz.edu.cn","is_corresponding":false,"name":"Hai-Ning Liang"},{"affiliations":["Xi'an Jiaotong-Liverpool University, Suzhou, China"],"email":"lingyun.yu@xjtlu.edu.cn","is_corresponding":true,"name":"Lingyun Yu"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1626","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"SpatialTouch: Exploring Spatial Data Visualizations in Cross-reality","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1632","abstract":"High-dimensional data, characterized by many features, can be difficult to visualize effectively. Dimensionality reduction techniques, such as PCA, UMAP, and t-SNE, address this challenge by projecting the data into a lower-dimensional space while preserving important relationships. TopoMap is another technique that excels at preserving the underlying structure of the data, leading to interpretable visualizations. In particular, TopoMap maps the high-dimensional data into a visual space, guaranteeing that the 0-dimensional persistence diagram of the Rips filtration of the visual space matches the one from the high-dimensional data. However, the original TopoMap algorithm can be slow and its layout can be too sparse for large and complex datasets. 
In this paper, we propose three improvements to TopoMap: 1) a more space-efficient layout, 2) a significantly faster implementation, and 3) a novel treemap-based representation to aid the exploration of the projections. These advancements make TopoMap, now referred to as TopoMap++, a more powerful tool for visualizing high-dimensional data.","authors":[{"affiliations":["New York University, New York City, United States"],"email":"vitoriaguardieiro@gmail.com","is_corresponding":true,"name":"Vitoria Guardieiro"},{"affiliations":["New York University, New York City, United States"],"email":"felipedeoliveira1407@gmail.com","is_corresponding":false,"name":"Felipe Inagaki de Oliveira"},{"affiliations":["Microsoft Research India, Bangalore, India"],"email":"harish.doraiswamy@microsoft.com","is_corresponding":false,"name":"Harish Doraiswamy"},{"affiliations":["University of Sao Paulo, Sao Carlos, Brazil"],"email":"gnonato@icmc.usp.br","is_corresponding":false,"name":"Luis Gustavo Nonato"},{"affiliations":["New York University, New York City, United States"],"email":"csilva@nyu.edu","is_corresponding":false,"name":"Claudio Silva"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1632","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"TopoMap++: A faster and more space efficient technique to compute projections with topological guarantees","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1638","abstract":"Probability density function (PDF) curves are among the few charts on a Cartesian coordinate system that are commonly presented without y-axes. This design decision may be due to the lack of relevance of vertical scaling in normal PDFs. In fact, as long as two normal PDFs have the same mean and standard deviations (SDs), they can be scaled to occupy different amounts of vertical space while still remaining statistically identical. Because unscaled PDF height increases as SD decreases, visualization designers may find themselves tempted to vertically shrink low-SD PDFs to avoid occlusion or save white space in their figures. While irregular vertical scaling has been explored in bar and line charts, the visualization community has yet to investigate how this purely visual manipulation may affect reader comparisons of PDFs. In this paper, we present two preregistered quantitative experiments (n=600, n=401) that systematically demonstrate that vertical scaling can lead to misinterpretations of PDFs. We also test visual interventions to mitigate misinterpretation. In some contexts, we find that including a y-axis reduces this effect. Overall, we find that keeping vertical scaling consistent, and therefore maintaining equal pixel areas under PDF curves, results in the highest likelihood of accurate comparisons. 
Our findings provide the first insights into the impact of vertical scaling on PDFs, and reveal the complicated nature of proportional area comparisons.","authors":[{"affiliations":["Northeastern University, Boston, United States"],"email":"racquel.fygenson@gmail.com","is_corresponding":true,"name":"Racquel Fygenson"},{"affiliations":["Northeastern University, Boston, United States"],"email":"l.padilla@northeastern.edu","is_corresponding":false,"name":"Lace M. Padilla"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1638","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"The Impact of Vertical Scaling on Normal Probability Density Function Plots","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1642","abstract":"Despite the development of numerous visual analytics tools for event sequence data across various domains, including, but not limited to, healthcare, digital marketing, and user behavior analysis, comparing these domain-specific investigations and transferring the results to new datasets and problem areas remain challenging. Task abstractions can help us go beyond domain-specific details, but existing visualization task abstractions are insufficient for event sequence visual analytics because they primarily focus on tabular datasets and often overlook automated analytical techniques. To address this gap, we propose a domain-agnostic multi-level task framework for event sequence analysis, derived from an analysis of 58 papers that present event sequence visualization systems. Our framework consists of four levels: objective, intent, strategy, and technique. Overall objectives identify the main goals of analysis. Intents comprise five high-level approaches adopted at each analysis step: augment data, simplify data, configure data, configure visualization, and create provenance. Each intent is accomplished through a number of strategies; for instance, data simplification can be achieved through aggregation, summarization, or segmentation. Finally, each strategy can be implemented by a set of techniques depending on the input and output components. We further show that techniques can be expressed through a quartet of action-input-output-criteria. 
We demonstrate the framework\u2019s power through mapping case studies and discuss its similarities and differences with previous event sequence task taxonomies.","authors":[{"affiliations":["University of Maryland, College Park, College Park, United States"],"email":"kzintas@umd.edu","is_corresponding":true,"name":"Kazi Tasnim Zinat"},{"affiliations":["University of Maryland, College Park, United States"],"email":"ssakhamu@terpmail.umd.edu","is_corresponding":false,"name":"Saimadhav Naga Sakhamuri"},{"affiliations":["University of Maryland, College Park, United States"],"email":"achen151@terpmail.umd.edu","is_corresponding":false,"name":"Aaron Sun Chen"},{"affiliations":["University of Maryland, College Park, United States"],"email":"leozcliu@umd.edu","is_corresponding":false,"name":"Zhicheng Liu"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1642","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"A Multi-Level Task Framework for Event Sequence Analysis","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1681","abstract":"In recent years, the global adoption of electric vehicles (EVs) has surged, prompting a corresponding rise in the installation of charging stations. This proliferation has underscored the importance of expediting the deployment of charging infrastructure. Both academia and industry have thus devoted themselves to addressing the charging station location problem (CSLP) to streamline this process. However, prevailing algorithms addressing CSLP are hampered by restrictive assumptions and computational overhead, leading to a dearth of comprehensive evaluations in the spatiotemporal dimensions. Consequently, their practical viability is restricted. Moreover, the placement of charging stations exerts a significant impact on both the road network and the power grid, which necessitates the evaluation of the potential post-deployment impacts on these interconnected networks holistically. In this study, we propose CSLens, a visual analytics system designed to inform charging station deployment decisions through the lens of coupled transportation and power networks. CSLens offers multiple visualizations and interactive features, empowering users to delve into the existing charging station layout, explore alternative deployment solutions, and assess the ensuing impact. To validate the efficacy of CSLens, we conducted two case studies and engaged in interviews with domain experts. Through these efforts, we substantiated the usability and practical utility of CSLens in enhancing the decision-making process surrounding charging station deployment. 
Our findings underscore CSLens\u2019s potential to serve as a valuable asset in navigating the complexities of charging infrastructure planning.","authors":[{"affiliations":["Sun Yat-sen University, Shenzhen, China"],"email":"zhangyt85@mail2.sysu.edu.cn","is_corresponding":false,"name":"Yutian Zhang"},{"affiliations":["Sun Yat-sen University, Shenzhen, China"],"email":"xulw8@mail2.sysu.edu.cn","is_corresponding":false,"name":"Liwen Xu"},{"affiliations":["Sun Yat-sen University, Shenzhen, China"],"email":"taoshc@mail2.sysu.edu.cn","is_corresponding":false,"name":"Shaocong Tao"},{"affiliations":["Sun Yat-sen University, Shenzhen, China"],"email":"guanqx3@mail.sysu.edu.cn","is_corresponding":false,"name":"Quanxue Guan"},{"affiliations":["ShanghaiTech University, Shanghai, China"],"email":"liquan@shanghaitech.edu.cn","is_corresponding":false,"name":"Quan Li"},{"affiliations":["Sun Yat-sen University, Shenzhen, China"],"email":"zenghp5@mail.sysu.edu.cn","is_corresponding":true,"name":"Haipeng Zeng"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1681","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"CSLens: Towards Better Deploying Charging Stations via Visual Analytics \u2014\u2014 A Coupled Networks Perspective","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1693","abstract":"We introduce a visual analysis method for multiple causality graphs with different outcome variables, namely, multi-outcome causality graphs. Multi-outcome causality graphs are important in healthcare for understanding multimorbidity and comorbidity. To support the visual analysis, we collaborated with medical experts to devise two comparative visualization techniques at different stages of the analysis process. First, a progressive visualization method is proposed for comparing multiple state-of-the-art causal discovery algorithms. The method can handle mixed-type datasets comprising both continuous and categorical variables and assist in the creation of a fine-tuned causality graph of a single outcome. Second, a comparative graph layout technique and specialized visual encodings are devised for the quick comparison of multiple causality graphs. In our visual analysis approach, analysts start by building individual causality graphs for each outcome variable, and then, multi-outcome causality graphs are generated and visualized with our comparative technique for analyzing differences and commonalities of these causality graphs. 
Evaluation includes quantitative measurements on benchmark datasets, a case study with a medical expert, and expert user studies with real-world health research data.","authors":[{"affiliations":["Institute of Medical Technology, Peking University Health Science Center, Beijing, China","National Institute of Health Data Science, Peking University, Beijing, China"],"email":"mengjiefan@bjmu.edu.cn","is_corresponding":true,"name":"Mengjie Fan"},{"affiliations":["Beihang University, Beijing, China","Peking University, Beijing, China"],"email":"yu.jinlu@qq.com","is_corresponding":false,"name":"Jinlu Yu"},{"affiliations":["University of Stuttgart, Stuttgart, Germany"],"email":"weiskopf@visus.uni-stuttgart.de","is_corresponding":false,"name":"Daniel Weiskopf"},{"affiliations":["Tongji College of Design and Innovation, Shanghai, China"],"email":"nan.cao@gmail.com","is_corresponding":false,"name":"Nan Cao"},{"affiliations":["Beijing University of Chinese Medicine, Beijing, China"],"email":"wanghuaiyuelva@126.com","is_corresponding":false,"name":"Huaiyu Wang"},{"affiliations":["Peking University, Beijing, China"],"email":"zhoulng@pku.edu.cn","is_corresponding":false,"name":"Liang Zhou"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1693","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Visual Analysis of Multi-outcome Causal Graphs","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1699","abstract":"Room-scale immersive data visualisations provide viewers a wide-scale overview of a large dataset, but to interact precisely with individual data points they typically have to navigate to change their point of view. In traditional screen-based visualisations, focus-and-context techniques allow visualisation users to keep a full dataset in view while making detailed selections. Such techniques have been studied extensively on desktop to allow precise selection within large data sets, but they have not been explored in immersive 3D modalities. In this paper we develop a novel immersive focus-and-context technique based on a \u201cmagic portal\u201d metaphor adapted specifically for data visualisation scenarios. An extendable-hand interaction technique is used to place a portal close to the region of interest. The other end of the portal then opens comfortably within the user's physical reach such that they can reach through to precisely select individual data points. Through a controlled study with 24 participants, we find strong evidence that portals reduce overshoots in selection and overall hand trajectory length, reducing arm fatigue compared to ranged interaction without the portal. The portals also enable us to use a robot arm to provide haptic feedback for data within the limited volume of the portal region. 
We demonstrate applications for portal-based selection through two use-case scenarios.","authors":[{"affiliations":["Monash University, Melbourne, Australia"],"email":"dai.shaozhang@gmail.com","is_corresponding":true,"name":"Shaozhang Dai"},{"affiliations":["Monash University, Melbourne, Australia"],"email":"yi.li5@monash.edu","is_corresponding":false,"name":"Yi Li"},{"affiliations":["The University of British Columbia (Okanagan Campus), Kelowna, Canada"],"email":"barrett.ens@ubc.ca","is_corresponding":false,"name":"Barrett Ens"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"lonni.besancon@gmail.com","is_corresponding":false,"name":"Lonni Besan\u00e7on"},{"affiliations":["Monash University, Melbourne, Australia"],"email":"tgdwyer@gmail.com","is_corresponding":false,"name":"Tim Dwyer"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1699","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Precise Embodied Data Selection in Room-scale Visualisations While Retaining View Context","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1705","abstract":"Contour trees describe the topology of level sets in scalar fields and are widely used in topological data analysis and visualization. A main challenge for utilizing contour trees for large-scale scientific data is their computation at scale using high-performance computing. To address this challenge, recent work has introduced distributed hierarchical contour trees for distributed computation and storage of contour trees. However, effective use of these distributed structures in analysis and visualization requires subsequent computation of geometric properties and branch decomposition to support contour extraction and exploration. In this work, we introduce distributed algorithms for augmentation, hypersweeps, and branch decomposition that enable parallel computation of geometric properties, and support the use of distributed contour trees as a query structure for scientific exploration. 
We evaluate the parallel performance of these algorithms and apply them to identify and extract important contours for scientific visualization.","authors":[{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"mingzhefluorite@gmail.com","is_corresponding":true,"name":"Mingzhe Li"},{"affiliations":["University of Leeds, Leeds, United Kingdom"],"email":"h.carr@leeds.ac.uk","is_corresponding":false,"name":"Hamish Carr"},{"affiliations":["Lawrence Berkeley National Laboratory, Berkeley, United States"],"email":"oruebel@lbl.gov","is_corresponding":false,"name":"Oliver R\u00fcbel"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"wang.bei@gmail.com","is_corresponding":false,"name":"Bei Wang"},{"affiliations":["Lawrence Berkeley National Laboratory, Berkeley, United States"],"email":"ghweber@lbl.gov","is_corresponding":false,"name":"Gunther H Weber"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1705","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Distributed Augmentation, Hypersweeps, and Branch Decomposition of Contour Trees for Scientific Exploration","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1708","abstract":"The widespread use of Deep Neural Networks (DNNs) has recently resulted in their application to challenging scientific visualization tasks. While advanced DNNs demonstrate impressive generalization abilities, understanding factors like prediction quality, confidence, robustness, and uncertainty is crucial. These insights aid application scientists in making informed decisions. However, DNNs lack inherent mechanisms to measure prediction uncertainty, prompting the creation of distinct frameworks for constructing robust uncertainty-aware models tailored to various visualization tasks. In this work, we develop uncertainty-aware implicit neural representations to model steady-state vector fields effectively. We comprehensively evaluate the efficacy of two principled deep uncertainty estimation techniques: (1) Deep Ensemble and (2) Monte Carlo Dropout, aimed at enabling uncertainty-informed visual analysis of features within steady vector field data. Our detailed exploration using several vector data sets indicates that uncertainty-aware models generate informative visualization results of vector field features. 
Furthermore, incorporating prediction uncertainty improves the resilience and interpretability of our DNN model, rendering it applicable for the analysis of complex vector field data sets.","authors":[{"affiliations":["Indian Institute of Technology Kanpur, Kanpur, India"],"email":"atulkrfcb@gmail.com","is_corresponding":false,"name":"Atul Kumar"},{"affiliations":["Indian Institute of Technology Kanpur, Kanpur, India"],"email":"gsiddharth2209@gmail.com","is_corresponding":false,"name":"Siddharth Garg"},{"affiliations":["Indian Institute of Technology Kanpur (IIT Kanpur), Kanpur, India"],"email":"soumya.cvpr@gmail.com","is_corresponding":true,"name":"Soumya Dutta"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1708","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Uncertainty-Aware Deep Neural Representations for Visual Analysis of Vector Field Data","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1726","abstract":"User experience in data visualization is typically assessed through post-viewing self-reports, but these overlook the dynamic cognitive processes during interaction. This study explores the use of mind wandering as a dynamic measure during visualization exploration. Participants reported mind wandering while viewing visualizations from a pre-labeled visualization database and then provided quantitative ratings of trust, engagement, and design quality, along with qualitative descriptions and short-term/long-term recall assessments. Results show that mind wandering negatively affects short-term visualization recall and various post-viewing measures, particularly for visualizations with little text annotation. Further, the type of mind wandering impacts engagement and emotional response. Mind wandering also acts as a serial mediator between visualization design elements and post-viewing measures. Overall, this research underscores the importance of incorporating mind wandering as a dynamic measure in visualization design and evaluation, offering novel avenues for enhancing user engagement and comprehension.","authors":[{"affiliations":["Arizona State University, Tempe, United States"],"email":"aarunku5@asu.edu","is_corresponding":true,"name":"Anjana Arunkumar"},{"affiliations":["Northeastern University, Boston, United States"],"email":"l.padilla@northeastern.edu","is_corresponding":false,"name":"Lace M. 
Padilla"},{"affiliations":["Arizona State University, Tempe, United States"],"email":"cbryan16@asu.edu","is_corresponding":false,"name":"Chris Bryan"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1726","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Mind Drifts, Data Shifts: Utilizing Mind Wandering to Track the Evolution of User Experience with Data Visualizations","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1730","abstract":"Understanding the input and output of data wrangling scripts is crucial for various tasks like debugging code and onboarding new data. However, existing research on script understanding primarily focuses on revealing the process of data transformations, lacking the ability to analyze the potential scope, i.e., the space of script inputs and outputs. Meanwhile, constructing input/output space during script analysis is challenging, as the wrangling scripts could be semantically complex and diverse, and the association between different data objects is intricate. To facilitate data workers in understanding the input and output spaces of wrangling scripts, we summarize ten types of constraints to express table spaces, and build a mapping between data transformations and these constraints to guide the construction of the input/output for individual transformations. Then, we propose a constraint generation model for integrating table constraints across multiple transformations. Based on the model, we develop Ferry, an interactive system that extracts and visualizes the data constraints describing the input and output spaces of data wrangling scripts, thereby enabling users to grasp the high-level semantics of complex scripts and locate the origins of faulty data transformations. In addition, Ferry provides example input and output data to assist users in interpreting the extracted constraints, and in checking and resolving conflicts between these constraints and any uploaded dataset. Ferry's effectiveness and usability are evaluated via a usage scenario and two case studies: the first assists users in onboarding new data and debugging scripts, while the second verifies input-output compatibility across data processing modules. 
Furthermore, an illustrative application is presented to demonstrate Ferry's flexibility.","authors":[{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"rickyluozs@gmail.com","is_corresponding":true,"name":"Zhongsu Luo"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"kaixiong@zju.edu.cn","is_corresponding":false,"name":"Kai Xiong"},{"affiliations":["Zhejiang University, Hangzhou, Zhejiang, China"],"email":"3220105578@zju.edu.cn","is_corresponding":false,"name":"Jiajun Zhu"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"chenran928@zju.edu.cn","is_corresponding":false,"name":"Ran Chen"},{"affiliations":["Newcastle University, Newcastle Upon Tyne, United Kingdom"],"email":"xinhuan.shu@gmail.com","is_corresponding":false,"name":"Xinhuan Shu"},{"affiliations":["Zhejiang University, Ningbo, China"],"email":"dweng@zju.edu.cn","is_corresponding":false,"name":"Di Weng"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"ycwu@zju.edu.cn","is_corresponding":false,"name":"Yingcai Wu"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1730","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Ferry: Toward Better Understanding of Input/Output Space for Data Wrangling Scripts","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1738","abstract":"As a step towards improving visualization literacy, we investigated how students approach reading visualizations differently after taking a university-level visualization course. We asked students to verbally walk through their process of making sense of unfamiliar visualizations, and conducted a qualitative analysis of these walkthroughs. Our qualitative analysis found changes in students' walkthroughs consistent with explicit learning goals of visualization courses. After taking a visualization course, students also engaged with visualizations in more sophisticated ways not fully captured by explicit learning goals: they were more likely to exhibit design empathy by thinking critically about the tradeoffs behind why a chart was designed in a particular way, and were better able to deconstruct a chart to make sense of it. We also gave students a quantitative assessment of visualization literacy and found no evidence of scores improving after the class, likely because the test we used focused on a different set of skills than those emphasized in visualization classes. While current measurement instruments for visualization literacy are useful, we propose developing standardized assessments for additional aspects of visualization literacy, such as deconstruction and design empathy. We also suggest those additional aspects could be made more explicit in learning goals set by visualization educators.
All supplemental materials are available at https://osf.io/w5pum/?view_only=f9eca3fa4711425582d454031b9c482e.","authors":[{"affiliations":["Northwestern University, Evanston, United States"],"email":"maryam.hedayati@u.northwestern.edu","is_corresponding":true,"name":"Maryam Hedayati"},{"affiliations":["Northwestern University, Chicago, United States"],"email":"matthew.kay@gmail.com","is_corresponding":false,"name":"Matthew Kay"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1738","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"What University Students Learn In Visualization Classes","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1746","abstract":"Hypergraphs provide a natural way to represent polyadic relationships in network data. For large hypergraphs, it is often difficult to visually detect structures within the data. Recently, a scalable polygon-based visualization framework was developed, allowing hypergraphs with thousands of hyperedges to be simplified and examined at different levels of detail. However, this approach does not consider structures such as cycles, bridges, and branches. Consequently, structures can be lost at simplified scales, making interpretations for real-world applications unreliable. In this paper, we define hypergraph structures using the bipartite graph representation. Powered by our analysis, we provide an algorithm to decompose large hypergraphs into meaningful features and to identify regions of non-planarity. We also introduce a set of topology-preserving and topology-altering atomic operations, enabling the preservation of important structures while removing topological noise at simplified scales.
We demonstrate our approach in several real-world applications.","authors":[{"affiliations":["Oregon State University, Corvallis, United States"],"email":"oliverpe@oregonstate.edu","is_corresponding":false,"name":"Peter D Oliver"},{"affiliations":["Oregon State University, Corvallis, United States"],"email":"zhange@eecs.oregonstate.edu","is_corresponding":true,"name":"Eugene Zhang"},{"affiliations":["Oregon State University, Corvallis, United States"],"email":"zhangyue@oregonstate.edu","is_corresponding":false,"name":"Yue Zhang"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1746","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Structure-Aware Simplification for Hypergraph Visualization","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1770","abstract":"The semantic similarity between documents of a text corpus can be visualized using map-like metaphors based on two-dimensional scatterplot layouts. These layouts result from a dimensionality reduction on the document-term matrix or a representation within a latent embedding, including topic models. Thereby, the resulting layout depends on the input data and the hyperparameters of the dimensionality reduction and is therefore affected by changes in either. However, such changes to the layout require additional cognitive effort from the user. In this work, we present a sensitivity study that analyzes the stability of these layouts concerning (1) changes in the text corpora, (2) changes in the hyperparameters, and (3) randomness in the initialization. Our approach has two stages: data measurement and data analysis. First, we derived layouts for combinations of three text corpora, six text embeddings, and a grid-search-inspired hyperparameter selection for the dimensionality reductions. Afterward, we quantified the similarity of the layouts through ten metrics concerning local and global structures and class separation. Second, we analyzed the resulting 42817 tabular data points in a descriptive statistical analysis. From this, we derived guidelines for informed decisions on the layout algorithm and highlighted specific hyperparameter settings.
We provide our implementation and results as a Git repository at https://github.com/hpicgs/Topic-Models-and-Dimensionality-Reduction-Sensitivity-Study .","authors":[{"affiliations":["University of Potsdam, Digital Engineering Faculty, Hasso Plattner Institute, Potsdam, Germany"],"email":"daniel.atzberger@hpi.de","is_corresponding":true,"name":"Daniel Atzberger"},{"affiliations":["University of Potsdam, Potsdam, Germany"],"email":"tcech@uni-potsdam.de","is_corresponding":false,"name":"Tim Cech"},{"affiliations":["Hasso Plattner Institute, Faculty of Digital Engineering, University of Potsdam, Potsdam, Germany"],"email":"willy.scheibel@hpi.de","is_corresponding":false,"name":"Willy Scheibel"},{"affiliations":["Hasso Plattner Institute, Faculty of Digital Engineering, University of Potsdam, Potsdam, Germany"],"email":"juergen.doellner@hpi.de","is_corresponding":false,"name":"J\u00fcrgen D\u00f6llner"},{"affiliations":["Utrecht University, Utrecht, Netherlands"],"email":"m.behrisch@uu.nl","is_corresponding":false,"name":"Michael Behrisch"},{"affiliations":["Graz University of Technology, Graz, Austria"],"email":"tobias.schreck@cgv.tugraz.at","is_corresponding":false,"name":"Tobias Schreck"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1770","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"A Large-Scale Sensitivity Analysis on Latent Embeddings and Dimensionality Reductions for Text Spatializations","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1793","abstract":"This research explores a novel paradigm for preserving topological segmentations in existing error-bounded lossy compressors. Today's lossy compressors rarely consider preserving topologies such as Morse-Smale complexes, and the discrepancies in topology between original and decompressed datasets could potentially result in erroneous interpretations or even incorrect scientific conclusions. In this paper, we focus on preserving Morse-Smale segmentations in 2D/3D piecewise linear scalar fields, targeting the precise reconstruction of minimum/maximum labels induced by the integral curve of each vertex. The key is to derive a series of edits during compression time; the edits are applied to the decompressed data, leading to an accurate reconstruction of segmentations while keeping the error within the prescribed error bound. To this end, we developed a workflow to fix extrema and integral curves alternately until convergence within finite iterations; we accelerate each workflow component with shared-memory/GPU parallelism to make the performance practical for coupling with compressors.
We demonstrate use cases on fluid dynamics, ocean, and cosmology application datasets, achieving a 1000x acceleration with an NVIDIA A100 GPU.","authors":[{"affiliations":["The Ohio State University, Columbus, United States"],"email":"li.14025@osu.edu","is_corresponding":true,"name":"Yuxiao Li"},{"affiliations":["University of California, Riverside, Riverside, United States"],"email":"xlian007@ucr.edu","is_corresponding":false,"name":"Xin Liang"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"wang.bei@gmail.com","is_corresponding":false,"name":"Bei Wang"},{"affiliations":["The Ohio State University, Columbus, United States"],"email":"qiu.722@osu.edu","is_corresponding":false,"name":"Yongfeng Qiu"},{"affiliations":["Argonne National Laboratory, Lemont, United States"],"email":"lyan@anl.gov","is_corresponding":false,"name":"Lin Yan"},{"affiliations":["The Ohio State University, Columbus, United States"],"email":"guo.2154@osu.edu","is_corresponding":false,"name":"Hanqi Guo"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1793","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"MSz: An Efficient Parallel Algorithm for Correcting Morse-Smale Segmentations in Error-Bounded Lossy Compressors","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1802","abstract":"In the biomedical domain, visualizing the document embeddings of an extensive corpus has been widely used in information-seeking tasks. However, three key challenges with existing visualizations make it difficult for clinicians to find information efficiently. First, the document embeddings used in these visualizations are generated statically by pretrained language models, which cannot adapt to the user's evolving interest. Second, existing document visualization techniques cannot effectively display how the documents are relevant to users\u2019 interest, making it difficult for users to identify the most pertinent information. Third, existing embedding generation and visualization processes suffer from a lack of interpretability, making it difficult to understand, trust, and use the result for decision-making. In this paper, we present a novel visual analytics pipeline for user-driven document representation and iterative information seeking (VADIS). VADIS introduces a prompt-based attention model (PAM) that generates dynamic document embeddings and document relevance adjusted to the user's query. To effectively visualize these two pieces of information, we design a new document map that leverages a circular grid layout to display documents based on both their relevance to the query and their semantic similarity. Additionally, to improve interpretability, we introduce a corpus-level attention visualization method that deepens the user's understanding of the model focus and enables users to identify potential oversights.
This visualization, in turn, empowers users to refine, update, and introduce new queries, thereby facilitating a dynamic and iterative information-seeking experience. We evaluated VADIS quantitatively and qualitatively on a real-world dataset of biomedical research papers to demonstrate its effectiveness.","authors":[{"affiliations":["Ohio State University, Columbus, United States"],"email":"qiu.580@buckeyemail.osu.edu","is_corresponding":true,"name":"Rui Qiu"},{"affiliations":["The Ohio State University, Columbus, United States"],"email":"tu.253@osu.edu","is_corresponding":false,"name":"Yamei Tu"},{"affiliations":["Washington University School of Medicine in St. Louis, St. Louis, United States"],"email":"yenp@wustl.edu","is_corresponding":false,"name":"Po-Yin Yen"},{"affiliations":["The Ohio State University, Columbus, United States"],"email":"hwshen@cse.ohio-state.edu","is_corresponding":false,"name":"Han-Wei Shen"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1802","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"VADIS: A Visual Analytics Pipeline for Dynamic Document Representation and Information Seeking","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1803","abstract":"Scalar field comparison is a fundamental task in scientific visualization. In topological data analysis, we compare topological descriptors of scalar fields---such as persistence diagrams and merge trees---as they provide succinct and robust abstract representations. While several similarity measures for topological descriptors seem to be both asymptotically and practically efficient with polynomial-time algorithms, they do not scale well when handling large-scale, time-varying scientific data and ensembles. In this paper, we propose a new framework to facilitate the comparative analysis of merge trees, inspired by tools from locality sensitive hashing (LSH). LSH hashes similar objects into the same hash buckets with high probability. We propose two new similarity measures for merge trees that can be computed via LSH, using new extensions to Recursive MinHash and subpath signature, respectively. Our similarity measures are extremely efficient to compute and closely resemble the results of existing measures such as merge tree edit distance or geometric interleaving distance.
Our experiments demonstrate the utility of our LSH framework in applications such as shape matching, clustering, key event detection, and ensemble summarization.","authors":[{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"lyuweiran@gmail.com","is_corresponding":false,"name":"Weiran Lyu"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"g.s.raghavendra@gmail.com","is_corresponding":true,"name":"Raghavendra Sridharamurthy"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"jeffp@cs.utah.edu","is_corresponding":false,"name":"Jeff M. Phillips"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"wang.bei@gmail.com","is_corresponding":false,"name":"Bei Wang"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1803","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Fast Comparative Analysis of Merge Trees Using Locality-Sensitive Hashing","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1805","abstract":"The optimization of cooling systems is important in many cases, for example for cabin and battery cooling in electric cars. Such an optimization is governed by multiple, conflicting objectives and it is performed across a multi-dimensional parameter space. The extent of the parameter space, the complexity of the non-linear model of the system, as well as the time needed per simulation run and factors that are not modeled in the simulation necessitate an iterative, semi-automatic approach. We present an interactive visual optimization approach, where the user works with a p-h diagram to steer an iterative, guided optimization process. A deep learning (DL) model provides estimates for parameters, given a target characterization of the system, while numerical simulation is used to predict system characteristics for an ensemble of parameter sets. Since the DL model only serves as an approximation of the inverse of the cooling system and since target characteristics can be chosen according to different, competing objectives, an iterative optimization process is realized, developing multiple sets of intermediate solutions, which are visually related to each other. The standard p-h diagram, integrated interactively in this approach, is complemented by a dual, also interactive visual representation of additional expressive measures representing the system characteristics. We show how the known four-point semantics of the p-h diagram meaningfully transfers to the dual data representation.
When evaluating this approach with our partners in the automotive domain, we found that our solution helped with the overall comprehension of the cooling system and that it led to faster convergence during optimization.","authors":[{"affiliations":["VRVis Research Center, Vienna, Austria"],"email":"splechtna@vrvis.at","is_corresponding":false,"name":"Rainer Splechtna"},{"affiliations":["Virginia Tech, Blacksburg, United States"],"email":"behravan@vt.edu","is_corresponding":false,"name":"Majid Behravan"},{"affiliations":["AVL AST doo, Zagreb, Croatia"],"email":"mario.jelovic@avl.com","is_corresponding":false,"name":"Mario Jelovic"},{"affiliations":["Virginia Tech, Blacksburg, United States"],"email":"gracanin@vt.edu","is_corresponding":false,"name":"Denis Gracanin"},{"affiliations":["University of Bergen, Bergen, Norway"],"email":"helwig.hauser@uib.no","is_corresponding":false,"name":"Helwig Hauser"},{"affiliations":["VRVis Research Center, Vienna, Austria"],"email":"matkovic@vrvis.at","is_corresponding":true,"name":"Kresimir Matkovic"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1805","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Interactive Design-of-Experiments: Optimizing a Cooling System","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1809","abstract":"Visualizing relational data is crucial for understanding complex connections between entities in social networks, political affiliations, or biological interactions. Well-known representations like node-link diagrams and adjacency matrices offer valuable insights, but their effectiveness relies on the ability to identify patterns in the underlying topological structure. Reordering strategies and layout algorithms play a vital role in the visualization process since the arrangement of nodes, edges, or cells influences the visibility of these patterns. The BioFabric visualization combines elements of node-link diagrams and adjacency matrices, leveraging the strengths of both: the visual clarity of node-link diagrams and the tabular organization of adjacency matrices. A unique characteristic of BioFabric is the ability to reorder nodes and edges separately. This raises the question of which combination of layout algorithms best reveals certain patterns. In this paper, we discuss patterns and anti-patterns in BioFabric, such as staircases or escalators, relate them to already established patterns, and propose metrics to evaluate their quality. Based on these quality metrics, we compared combinations of well-established reordering techniques applied to BioFabric using a well-known benchmark data set. Our experiments indicate that the edge order has a stronger influence on revealing patterns than the node layout. The results show that the best combination for revealing staircases is a barycentric node layout, together with an edge order based on node indices and length.
Our research contributes a first building block for many promising future research directions, which we also share and discuss. A free copy of this paper and all supplemental materials are available at OSF.","authors":[{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"fuchs@dbvis.inf.uni-konstanz.de","is_corresponding":true,"name":"Johannes Fuchs"},{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"alexander.frings@uni-konstanz.de","is_corresponding":false,"name":"Alexander Frings"},{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"maria-viktoria.heinle@uni-konstanz.de","is_corresponding":false,"name":"Maria-Viktoria Heinle"},{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"keim@uni-konstanz.de","is_corresponding":false,"name":"Daniel Keim"},{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"sara.di-bartolomeo@uni-konstanz.de","is_corresponding":false,"name":"Sara Di Bartolomeo"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1809","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Quality Metrics and Reordering Strategies for Revealing Patterns in BioFabric Visualizations","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1810","abstract":"Classical bibliography, by scrutinizing preserved catalogs from both official archives and personal collections of accumulated books, examines the books throughout history, thereby elucidating cultural development across historical periods. In this work, we collaborate with domain experts to accomplish the task of data annotation concerning Chinese ancient catalogs. We introduce CataAnno, a system that helps users complete annotations more efficiently through cross-linked views, recommendation methods, and convenient annotation interactions. The recommendation method can learn the background knowledge and annotation patterns that experts subconsciously integrate into the data during prior annotation processes. CataAnno searches for the most relevant previously annotated examples and recommends them to the user. Meanwhile, the cross-linked views assist users in comprehending the correlations between entries and offer explanations for these recommendations. Evaluation and expert feedback confirm that the CataAnno system, by offering high-quality recommendations and visualizing the relationships between entries, can reduce the need for specialized knowledge during the annotation process.
This results in improved accuracy and consistency in annotations, thereby enhancing the overall efficiency.","authors":[{"affiliations":["Peking University, Beijing, China"],"email":"hanning.shao@pku.edu.cn","is_corresponding":true,"name":"Hanning Shao"},{"affiliations":["Peking University, Beijing, China"],"email":"xiaoru.yuan@pku.edu.cn","is_corresponding":false,"name":"Xiaoru Yuan"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1810","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"CataAnno: An Ancient Catalog Annotator for Annotation Cleaning by Recommendation","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1830","abstract":"Over the past decade, several urban visual analytics systems have been proposed to tackle a host of challenges faced by cities, in areas as diverse as transportation, weather, and real estate. Many of these systems have been designed through engagement with urban experts, aiming to distill intricate urban analysis workflows into interactive visualizations and interfaces. The design, implementation, and practical use of these systems, however, still rely on siloed approaches that lead to bespoke tools that are hard to reproduce and extend. At the design level, these systems undervalue rich data workflows from urban experts by usually treating the experts only as data providers and evaluators. At the implementation level, these systems lack interoperability with other technical frameworks. At the practical use level, these systems tend to be narrowly focused on specific fields, inadvertently creating barriers for cross-domain collaboration. To tackle these gaps, we present Curio, a framework for collaborative urban visual analytics. Curio uses a dataflow model with multiple abstraction levels (code, grammar, GUI elements) to facilitate collaboration across the design and implementation of visual analytics components. The framework allows experts to intertwine preprocessing, managing, and visualization stages while tracking provenance of code and visualizations. In collaboration with urban experts, we evaluate Curio through a diverse series of use cases targeting urban accessibility, urban microclimate, and sunlight access.
These cases use different types of urban data and domain methodologies to illustrate Curio's flexibility in tackling pressing societal challenges.","authors":[{"affiliations":["University of Illinois at Chicago, Chicago, United States"],"email":"gmorei3@uic.edu","is_corresponding":false,"name":"Gustavo Moreira"},{"affiliations":["Massachusetts Institute of Technology, Somerville, United States"],"email":"maryamh@mit.edu","is_corresponding":false,"name":"Maryam Hosseini"},{"affiliations":["University of Illinois Urbana-Champaign, Urbana-Champaign, United States"],"email":"carolinavfs@id.uff.br","is_corresponding":false,"name":"Carolina Veiga Ferreira de Souza"},{"affiliations":["Universidade Federal Fluminense, Niteroi, Brazil"],"email":"lucasalexandre.s.cc@gmail.com","is_corresponding":false,"name":"Lucas Alexandre"},{"affiliations":["Politecnico di Milano, Milano, Italy"],"email":"nicola.colaninno@polimi.it","is_corresponding":false,"name":"Nicola Colaninno"},{"affiliations":["Universidade Federal Fluminense, Niter\u00f3i, Brazil"],"email":"danielcmo@ic.uff.br","is_corresponding":false,"name":"Daniel de Oliveira"},{"affiliations":["Universidade Federal de Pernambuco, Recife, Brazil"],"email":"nivan@cin.ufpe.br","is_corresponding":false,"name":"Nivan Ferreira"},{"affiliations":["Universidade Federal Fluminense, Niteroi, Brazil"],"email":"mlage@ic.uff.br","is_corresponding":false,"name":"Marcos Lage"},{"affiliations":["University of Illinois Chicago, Chicago, United States"],"email":"fabiom@uic.edu","is_corresponding":true,"name":"Fabio Miranda"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1830","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Curio: A Dataflow-Based Framework for Collaborative Urban Visual Analytics","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1831","abstract":"When using exploratory visual analysis to examine multivariate hierarchical data, users often need to query data to narrow down the scope of analysis. However, formulating effective query expressions remains a challenge for multivariate hierarchical data, particularly when datasets become very large. To address this issue, we develop a declarative grammar, HiRegEx (Hierarchical data Regular Expression), for querying and exploring multivariate hierarchical data. Rooted in the extended multi-level task topology framework for tree visualizations (e-MLTT), HiRegEx delineates three query targets (node, path, and subtree) and two aspects for querying these targets (features and positions), and uses operators developed based on classical regular expressions for query construction. We develop a prototype system, TreeQueryER, to integrate an exploratory framework for querying and exploring multivariate hierarchical data based on HiRegEx.
The exploratory framework includes three major components: top-down pattern specification, bottom-up data-driven inquiry, and context-creation data overview. We validate the expressiveness of HiRegEx with the tasks from the e-MLTT framework and showcase its utility and effectiveness through a usage scenario involving expert users in the analysis of a citation tree dataset.","authors":[{"affiliations":["Beijing Institute of Technology, Beijing, China"],"email":"guozhg.li@gmail.com","is_corresponding":true,"name":"Guozheng Li"},{"affiliations":["Beijing Institute of Technology, Beijing, China"],"email":"haotian.mi1@gmail.com","is_corresponding":false,"name":"Haotian Mi"},{"affiliations":["Beijing Institute of Technology, Beijing, China"],"email":"liuchi02@gmail.com","is_corresponding":false,"name":"Chi Harold Liu"},{"affiliations":["Ochanomizu University, Tokyo, Japan"],"email":"itot@is.ocha.ac.jp","is_corresponding":false,"name":"Takayuki Itoh"},{"affiliations":["Beijing Institute of Technology, Beijing, China"],"email":"wanggrbit@126.com","is_corresponding":false,"name":"Guoren Wang"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1831","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"HiRegEx: Interactive Visual Query and Exploration of Multivariate Hierarchical Data","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1833","abstract":"The concept of an intelligent augmented reality (AR) assistant has applications as significant as they are wide-ranging, with potential uses in medicine, military endeavors, and mechanics. Such an assistant must be able to perceive the performer\u2019s environment and actions, reason about the state of the environment in relation to a given task, and seamlessly interact with the performer. These interactions typically involve an AR headset equipped with a variety of sensors that capture video, audio, and haptic feedback. Previous works have sought to facilitate the development of such an assistant by visualizing these sensor data streams as well as the machine learning model outputs that support an assistant\u2019s perception and reasoning capabilities. However, existing visual analytics systems do not include biometric data or focus on user modeling, and are only capable of visualizing a single task session for a single performer at a time. Furthermore, they mainly focus on traditional task analysis that typically assumes a linear progression from one step to the next. We propose a visual analytics system that allows users to compare performance during multiple task sessions, focusing on non-linear tasks where different paths or sequences can lead to the successful completion of the task.
In particular, we design visualizations for understanding user behavior through functional near-infrared spectroscopy (fNIRS) data as a proxy for perception, attention, and memory, as well as corresponding motion data (acceleration, angular velocity, and eye gaze). We distill these insights into visual embeddings that allow users to easily select groups of sessions with similar behaviors. We provide case studies that explore how insights into task performance can be gleaned from these visualizations using data collected during helicopter copilot training tasks. Finally, we evaluate our approach by conducting an in-depth examination of a think-aloud experiment with five domain experts.","authors":[{"affiliations":["New York University, New York, United States"],"email":"s.castelo@nyu.edu","is_corresponding":true,"name":"Sonia Castelo Quispe"},{"affiliations":["New York University, New York, United States"],"email":"jlrulff@gmail.com","is_corresponding":false,"name":"Jo\u00e3o Rulff"},{"affiliations":["New York University, Brooklyn, United States"],"email":"pss442@nyu.edu","is_corresponding":false,"name":"Parikshit Solunke"},{"affiliations":["New York University, New York, United States"],"email":"erin.mcgowan@nyu.edu","is_corresponding":false,"name":"Erin McGowan"},{"affiliations":["New York University, New York City, United States"],"email":"guandewu@nyu.edu","is_corresponding":false,"name":"Guande Wu"},{"affiliations":["New York University, Brooklyn, United States"],"email":"iran@ccrma.stanford.edu","is_corresponding":false,"name":"Iran Roman"},{"affiliations":["New York University, New York, United States"],"email":"rlopez@nyu.edu","is_corresponding":false,"name":"Roque Lopez"},{"affiliations":["New York University, Brooklyn, United States"],"email":"bs3639@nyu.edu","is_corresponding":false,"name":"Bea Steers"},{"affiliations":["New York University, New York, United States"],"email":"qisun@nyu.edu","is_corresponding":false,"name":"Qi Sun"},{"affiliations":["New York University, New York, United States"],"email":"jpbello@nyu.edu","is_corresponding":false,"name":"Juan Pablo Bello"},{"affiliations":["Northrop Grumman Mission Systems, Redondo Beach, United States"],"email":"bradley.feest@ngc.com","is_corresponding":false,"name":"Bradley S Feest"},{"affiliations":["Northrop Grumman, Aurora, United States"],"email":"michael.middleton@ngc.com","is_corresponding":false,"name":"Michael Middleton"},{"affiliations":["Northrop Grumman, Falls Church, United States"],"email":"ryan.mckendrick@ngc.com","is_corresponding":false,"name":"Ryan McKendrick"},{"affiliations":["New York University, New York City, United States"],"email":"csilva@nyu.edu","is_corresponding":false,"name":"Claudio Silva"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1833","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"HuBar: A Visual Analytics Tool to Explore Human Behaviour based on fNIRS in AR guidance systems","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1836","abstract":"Shape is commonly used to distinguish between categories in multi-class scatterplots. However, existing guidelines for choosing effective shape palettes rely largely on intuition and do not consider how these needs may change as the number of categories increases. Although shapes, unlike colors, are finite in number, they cannot be represented in a numerical space, making it difficult to propose general guidelines for shape choices or to shed light on the design heuristics behind designer-crafted shape palettes. This paper presents a series of four experiments evaluating the efficiency of 39 shapes across three tasks -- relative mean judgment tasks, expert choices, and data correlation estimation. Given how complex and tangled the results are, rather than relying on conventional features for modeling, we built a model and introduced a corresponding design tool that offers recommendations for shape encodings. The perceptual effectiveness of shapes significantly varies across specific pairs, and certain shapes may enhance perceptual efficiency and accuracy. However, how performance varies does not map well to classical features of shape such as angles, fill, or convex hull. We developed a model based on pairwise relations between shapes measured in our experiments and the number of shapes required to intelligently recommend shape palettes for a given design. This tool provides designers with agency over shape selection while incorporating empirical elements of perceptual performance captured in our study. Our model advances the understanding of shape perception in visualization contexts and provides practical design guidelines for advanced shape usage in visualization design that optimize perceptual efficiency.","authors":[{"affiliations":["University of North Carolina-Chapel Hill, Chapel Hill, United States"],"email":"chint@cs.unc.edu","is_corresponding":true,"name":"Chin Tseng"},{"affiliations":["University of North Carolina-Chapel Hill, Chapel Hill, United States"],"email":"zeyuwang@cs.unc.edu","is_corresponding":false,"name":"Arran Zeyu Wang"},{"affiliations":["University of Oklahoma, Norman, United States"],"email":"quadri@ou.edu","is_corresponding":false,"name":"Ghulam Jilani Quadri"},{"affiliations":["University of North Carolina-Chapel Hill, Chapel Hill, United States"],"email":"danielle.szafir@cs.unc.edu","is_corresponding":false,"name":"Danielle Albers Szafir"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1836","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"An Empirically Grounded Approach for Designing Shape Palettes","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1865","abstract":"In medical diagnostics, for both early disease detection and routine patient care, particle-based contamination of in-vitro
diagnostics (IVD) consumables poses a significant threat to patients. Objective data-driven decision making on the severity of contamination is key for reducing risk to patients, while saving time and cost in the quality assessment process. Our collaborators introduced us to their quality control process, including particle data acquisition through image recognition, feature extraction, and attributes reflecting the production context of particles. Shortcomings of the current process include weak support for exploring thousands of particle images and their associated attributes, and ineffective knowledge externalization for sense-making. Following the design study methodology, our contributions are a characterization of the problem space and requirements, the development and validation of DaedalusData, a comprehensive discussion of our study\u2019s learnings, and a generalizable approach for knowledge externalization. DaedalusData is a visual analytics system that empowers domain experts to explore particle contamination patterns, to label particles in label alphabets, and to externalize knowledge through semi-supervised label-informed data projections. The results of our case study show that DaedalusData supports experts in generating meaningful, comprehensive data overviews. Additionally, our user study shows that DaedalusData offers high usability, efficiently supports the labeling of large quantities of particles, and utilizes externalized knowledge to augment the dataset. Reflecting on our approach, we discuss insights on dataset augmentation via human knowledge externalization, and on the scalability and trade-offs that come with the adoption of this approach in practice.","authors":[{"affiliations":["University of Z\u00fcrich, Z\u00fcrich, Switzerland","Roche pRED, Basel, Switzerland"],"email":"alexander.wyss@protonmail.com","is_corresponding":true,"name":"Alexander Wyss"},{"affiliations":["University of Zurich, Zurich, Switzerland"],"email":"gab.morgenshtern@gmail.com","is_corresponding":false,"name":"Gabriela Morgenshtern"},{"affiliations":["Roche Diagnostics International, Rotkreuz, Switzerland"],"email":"a.hirschhuesler@gmail.com","is_corresponding":false,"name":"Amanda Hirsch-H\u00fcsler"},{"affiliations":["University of Zurich, Zurich, Switzerland"],"email":"bernard@ifi.uzh.ch","is_corresponding":false,"name":"J\u00fcrgen Bernard"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1865","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"DaedalusData: Exploration, Knowledge Externalization and Labeling of Particles in Medical Manufacturing - A Design Study","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1866","abstract":"Feature grid Scene Representation Networks (SRNs) have been applied to scientific data as compact functional surrogates for analysis and visualization.
As SRNs are black-box lossy data representations, assessing the prediction quality is critical for scientific visualization applications to ensure that scientists can trust the information being visualized. Currently, existing architectures do not support inference-time reconstruction quality assessment, as voxel-wise errors cannot be evaluated in the absence of ground truth data. By employing uncertain neural network architectures in feature grid SRNs, we obtain prediction variances during inference time to facilitate confidence-aware data reconstruction. Specifically, we propose a parameter-efficient multi-decoder Ensemble SRN (E-SRN) architecture consisting of a shared feature grid with multiple lightweight multi-layer perceptron decoders. E-SRN can generate a set of plausible predictions for a given input coordinate to compute the mean as the ensemble prediction and the variance as a confidence score. The voxel-wise variance can be rendered along with the data to inform the reconstruction quality, or be integrated into uncertainty-aware volume visualization algorithms. To prevent the misalignment between the quantified variance and the prediction quality, we propose a novel variance regularization loss for ensemble learning that encourages the Regularized Ensemble SRN (RE-SRN) to obtain a more reliable variance that correlates closely with the true model error. We comprehensively evaluate the quality of variance quantification and data reconstruction of Monte Carlo Dropout (MCD), Mean Field Variational Inference (MFVI), Deep Ensemble (DE), and Predicting Variance (PV) in comparison with our proposed E-SRN and RE-SRN applied to state-of-the-art feature grid SRNs across diverse scalar field datasets. We demonstrate that RE-SRN attains the most accurate data reconstruction and competitive variance-error correlation among uncertain SRNs under the same neural network parameter budgets. Furthermore, we present an adaptation of uncertainty-aware volume rendering and shed light on the potential of incorporating uncertain predictions in improving the quality of volume rendering for uncertain SRNs.
Through ablation studies on the regularization strength and ensemble size, we show that E-SRN and RE-SRN are expected to perform sufficiently well with a default configuration without requiring customized hyperparameter settings for different datasets.","authors":[{"affiliations":["The Ohio State University, Columbus, United States"],"email":"xiong.336@osu.edu","is_corresponding":true,"name":"Tianyu Xiong"},{"affiliations":["The Ohio State University, Columbus, United States"],"email":"wurster.18@osu.edu","is_corresponding":false,"name":"Skylar Wolfgang Wurster"},{"affiliations":["The Ohio State University, Columbus, United States","Argonne National Laboratory, Lemont, United States"],"email":"guo.2154@osu.edu","is_corresponding":false,"name":"Hanqi Guo"},{"affiliations":["Argonne National Laboratory, Lemont, United States"],"email":"tpeterka@mcs.anl.gov","is_corresponding":false,"name":"Tom Peterka"},{"affiliations":["The Ohio State University, Columbus, United States"],"email":"hwshen@cse.ohio-state.edu","is_corresponding":false,"name":"Han-Wei Shen"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1866","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Regularized Multi-Decoder Ensemble for an Error-Aware Scene Representation Network","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1874","abstract":"A layered network is an important category of graph in which every node is assigned to a layer and layers are drawn as parallel or radial lines. They are commonly used to display temporal data or hierarchical networks. Previous research has demonstrated that minimizing edge crossings is the most important criterion to consider when looking to improve the readability of such networks. While heuristic approaches exist for crossing minimization, we are interested in optimal approaches to the problem that prioritize human readability over computational scalability. We aim to improve the usefulness and applicability of such optimal methods by understanding and improving their scalability to larger graphs. This paper categorizes and evaluates the state-of-the-art linear programming formulations for exact crossing minimization and describes nine new and existing techniques that could plausibly accelerate the optimization algorithm. Through a computational evaluation, we explore each technique's effect on calculation time and how the techniques assist or inhibit one another, allowing researchers and practitioners to adapt them to the characteristics of their networks. Our best-performing techniques yielded a median improvement of 2.5-17x depending on the solver used, giving us the capability to create optimal layouts faster and for larger networks. We provide an open-source implementation of our methodology in Python, where users can pick which combination of techniques to enable according to their use case.
A free copy of this paper and all supplemental materials, datasets used, and source code are available at https://osf.io/.","authors":[{"affiliations":["Northeastern University, Boston, United States"],"email":"wilson.conn@northeastern.edu","is_corresponding":true,"name":"Connor Wilson"},{"affiliations":["Northeastern University, Boston, United States"],"email":"eduardopuertac@gmail.com","is_corresponding":false,"name":"Eduardo Puerta"},{"affiliations":["Northeastern University, Boston, United States"],"email":"turokhunter@gmail.com","is_corresponding":false,"name":"Tarik Crnovrsanin"},{"affiliations":["University of Konstanz, Konstanz, Germany","Northeastern University, Boston, United States"],"email":"sara.di-bartolomeo@uni-konstanz.de","is_corresponding":false,"name":"Sara Di Bartolomeo"},{"affiliations":["Northeastern University, Boston, United States"],"email":"c.dunne@northeastern.edu","is_corresponding":false,"name":"Cody Dunne"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1874","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Evaluating and extending speedup techniques for optimal crossing minimization in layered graph drawings","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1880","abstract":"Merge trees are a valuable tool in scientific visualization of scalar fields; however, current methods for merge tree comparisons are computationally expensive, primarily due to the exhaustive matching between tree nodes. To address this challenge, we introduce the merge tree neural networks (MTNN), a learned neural network model designed for merge tree comparison. The MTNN enables rapid and high-quality similarity computation. We first demonstrate how graph neural networks (GNNs), which have emerged as effective encoders for graphs, can be trained to produce embeddings of merge trees in vector spaces that enable efficient similarity comparison. Next, we formulate the novel MTNN model that further improves the similarity comparisons by integrating the tree and node embeddings with a new topological attention mechanism. We demonstrate the effectiveness of our model on real-world data in different domains and examine our model's generalizability across various datasets. Our experimental analysis demonstrates our approach's superiority in accuracy and efficiency.
In particular, we speed up the prior state-of-the-art by more than 100x on the benchmark datasets while maintaining an error rate below 0.1%.","authors":[{"affiliations":["Tulane University, New Orleans, United States"],"email":"yqin2@tulane.edu","is_corresponding":true,"name":"Yu Qin"},{"affiliations":["Montana State University, Bozeman, United States"],"email":"brittany.fasy@montana.edu","is_corresponding":false,"name":"Brittany Terese Fasy"},{"affiliations":["Tulane University, New Orleans, United States"],"email":"cwenk@tulane.edu","is_corresponding":false,"name":"Carola Wenk"},{"affiliations":["Tulane University, New Orleans, United States"],"email":"bsumma@tulane.edu","is_corresponding":false,"name":"Brian Summa"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1880","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Rapid and Precise Topological Comparison with Merge Tree Neural Networks","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-full-1917","abstract":"The importance of data charts is self-evident, given their ability to express complex data in a simple format that facilitates quick and easy comparisons, analysis, and consumption. However, the inherent visual nature of the charts creates barriers for people with visual impairments to reap the associated benefits to the same extent as their sighted peers. While extant research has predominantly focused on understanding and addressing these barriers for blind screen reader users, the needs of low-vision screen magnifier users have been largely overlooked. In an interview study, almost all low-vision participants stated that it was challenging to interact with data charts on small screen devices such as smartphones and tablets, even though they could technically \u201csee\u201d the chart content. They ascribed these challenges mainly to the magnification-induced loss of visual context that connected data points with each other and also with chart annotations, e.g., axis values. In this paper, we present a method that addresses this problem by automatically transforming charts that are typically non-interactive images into personalizable interactive charts which allow selective viewing of desired data points and preserve visual context as much as possible under screen enlargement. We evaluated our method in a usability study with 26 low-vision participants, who all performed a set of representative chart-related tasks under different study conditions. 
In the study, we observed that our method significantly improved the usability of charts over both the status quo screen magnifier and a state-of-the-art space compaction-based solution.","authors":[{"affiliations":["Old Dominion University, Norfolk, United States"],"email":"yprak001@odu.edu","is_corresponding":true,"name":"Yash Prakash"},{"affiliations":["Old Dominion University, Norfolk, United States"],"email":"pkhan002@odu.edu","is_corresponding":false,"name":"Pathan Aseef Khan"},{"affiliations":["Old Dominion University, Norfolk, United States"],"email":"anaya001@odu.edu","is_corresponding":false,"name":"Akshay Kolgar Nayak"},{"affiliations":["Old Dominion University, Norfolk, United States"],"email":"uksjayarathna@gmail.com","is_corresponding":false,"name":"Sampath Jayarathna"},{"affiliations":["Michigan State University, East Lansing, United States"],"email":"leehaena@msu.edu","is_corresponding":false,"name":"Hae-Na Lee"},{"affiliations":["Old Dominion University, Norfolk, United States"],"email":"vganjigu@odu.edu","is_corresponding":false,"name":"Vikas Ashok"}],"award":"","doi":"","event_id":"v-full","event_title":"VIS Full Papers","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-full-1917","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"full0","session_room":"None","session_title":"Full Papers","session_uid":"v-full","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Full Papers"],"time_stamp":"","title":"Towards Enhancing Low Vision Usability of Data Charts on Smartphones","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20233299602","abstract":"Data transformation is an essential step in data science. While experts primarily use programming to transform their data, there is an increasing need to support non-programmers with user interface-based tools. With the rapid development in interaction techniques and computing environments, we report our empirical findings about the effects of interaction techniques and environments on performing data transformation tasks. Specifically, we studied the potential benefits of direct interaction and virtual reality (VR) for data transformation. We compared gesture interaction versus a standard WIMP user interface, each on the desktop and in VR. With the tested data and tasks, we found time performance was similar between desktop and VR. Meanwhile, VR demonstrates preliminary evidence to better support provenance and sense-making throughout the data transformation process. 
Our exploration of performing data transformation in VR also provides initial affirmation for enabling an iterative and fully immersive data science workflow.","authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Sungwon In"},{"affiliations":"","email":"","is_corresponding":false,"name":"Tica Lin"},{"affiliations":"","email":"","is_corresponding":false,"name":"Chris North"},{"affiliations":"","email":"","is_corresponding":false,"name":"Hanspeter Pfister"},{"affiliations":"","email":"","is_corresponding":false,"name":"Yalong Yang"}],"award":"","doi":"10.1109/TVCG.2023.3299602","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20233299602","image_caption":"","keywords":["Immersive Analytics, Data Transformation, Data Science, Interaction, Empirical Study, Virtual/Augmented/Mixed Reality"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"This is the Table I Want! Interactive Data Transformation on Desktop and in Virtual Reality","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20233310019","abstract":"The dynamic network visualization design space consists of two major dimensions: network structural and temporal representation. As more techniques are developed and published, a clear need for evaluation and experimental comparisons between them emerges. Most studies explore the temporal dimension and diverse interaction techniques supporting the participants, focusing on a single structural representation. Empirical evidence about performance and preference for different visualization approaches is scattered over different studies, experimental settings, and tasks. This paper aims to comprehensively investigate the dynamic network visualization design space in two evaluations. First, a controlled study assessing participants' response times, accuracy, and preferences for different combinations of network structural and temporal representations on typical dynamic network exploration tasks, with and without the support of standard interaction methods. Second, the best-performing combinations from the first study are enhanced based on participants' feedback and evaluated in a heuristic-based qualitative study with visualization experts on a real-world network. Our results highlight node-link with animation and playback controls as the best-performing combination and the most preferred based on ratings. Matrices achieve similar performance to node-link in the first study but have considerably lower scores in our second evaluation. 
Similarly, juxtaposition exhibits evident scalability issues in more realistic analysis contexts.","authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Velitchko Filipov"},{"affiliations":"","email":"","is_corresponding":false,"name":"Alessio Arleo"},{"affiliations":"","email":"","is_corresponding":false,"name":"Markus B\u00f6gl"},{"affiliations":"","email":"","is_corresponding":false,"name":"Silvia Miksch"}],"award":"","doi":"10.1109/TVCG.2023.3310019","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20233310019","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"On Network Structural and Temporal Encodings: A Space and Time Odyssey","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20233302308","abstract":"We visualize the predictions of multiple machine learning models to help biologists as they interactively make decisions about cell lineage---the development of a (plant) embryo from a single ovum cell. Based on a confocal microscopy dataset, traditionally biologists manually constructed the cell lineage, starting from this observation and reasoning backward in time to establish their inheritance. To speed up this tedious process, we make use of machine learning (ML) models trained on a database of manually established cell lineages to assist the biologist in cell assignment. Most biologists, however, are not familiar with ML, nor is it clear to them which model best predicts the embryo's development. We thus have developed a visualization system that is designed to support biologists in exploring and comparing ML models, checking the model predictions, detecting possible ML model mistakes, and deciding on the most likely embryo development. To evaluate our proposed system, we deployed our interface with six biologists in an observational study. 
Our results show that the visual representations of machine learning are easily understandable, and our tool, LineageD+, could potentially increase biologists' working efficiency and enhance the understanding of embryos.","authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Jiayi Hong"},{"affiliations":"","email":"","is_corresponding":false,"name":"Ross Maciejewski"},{"affiliations":"","email":"","is_corresponding":false,"name":"Alain Trubuil"},{"affiliations":"","email":"","is_corresponding":false,"name":"Tobias Isenberg"}],"award":"","doi":"10.1109/TVCG.2023.3302308","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20233302308","image_caption":"","keywords":["Visualization, visual analytics, machine learning, comparing ML predictions, human-AI teaming, plant biology, cell lineage"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"Visualizing and Comparing Machine Learning Predictions to Improve Human-AI Teaming on the Example of Cell Lineage","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20233275925","abstract":"A contiguous area cartogram is a geographic map in which the area of each region is proportional to numerical data (e.g., population size) while keeping neighboring regions connected. In this study, we investigated whether value-to-area legends (square symbols next to the values represented by the squares' areas) and grid lines aid map readers in making better area judgments. We conducted an experiment to determine the accuracy, speed, and confidence with which readers infer numerical data values for the mapped regions. We found that, when only informed about the total numerical value represented by the whole cartogram without any legend, the distribution of estimates for individual regions was centered near the true value with substantial spread. Legends with grid lines significantly reduced the spread but led to a tendency to underestimate the values. Comparing differences between regions or between cartograms revealed that legends and grid lines slowed the estimation without improving accuracy. However, participants were more likely to complete the tasks when legends and grid lines were present, particularly when the area units represented by these features could be interactively selected. We recommend considering the cartogram's use case and purpose before deciding whether to include grid lines or an interactive legend.","authors":[{"affiliations":"","email":"","is_corresponding":false,"name":"Kelvin L. T. Fung"},{"affiliations":"","email":"","is_corresponding":false,"name":"Simon T. Perrault"},{"affiliations":"","email":"","is_corresponding":false,"name":"Michael T. 
Gastner"}],"award":"","doi":"10.1109/TVCG.2023.3275925","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20233275925","image_caption":"","keywords":["Task Analysis, Symbols, Data Visualization, Sociology, Visualization, Switches, Mice, Cartogram, Geovisualization, Interactive Data Exploration, Quantitative Evaluation"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"Effectiveness of Area-to-Value Legends and Grid Lines in Contiguous Area Cartograms","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20233289292","abstract":"Reading a visualization is like reading a paragraph. Each sentence is a comparison: the mean of these is higher than those; this difference is smaller than that. What determines which comparisons are made first? The viewer's goals and expertise matter, but the way that values are visually grouped together within the chart also impacts those comparisons. Research from psychology suggests that comparisons involve multiple steps. First, the viewer divides the visualization into a set of units. This might include a single bar or a grouped set of bars. Then the viewer selects and compares two of these units, perhaps noting that one pair of bars is longer than another. Viewers might take an additional third step and perform a second-order comparison, perhaps determining that the difference between one pair of bars is greater than the difference between another pair. We create a visual comparison taxonomy that allows us to develop and test a sequence of hypotheses about which comparisons people are more likely to make when reading a visualization. We find that people tend to compare two groups before comparing two individual bars and that second-order comparisons are rare. Visual cues like spatial proximity and color can influence which elements are grouped together and selected for comparison, with spatial proximity being a stronger grouping cue. 
Interestingly, once the viewer grouped together and compared a set of bars, regardless of whether the group is formed by spatial proximity or color similarity, they no longer consider other possible groupings in their comparisons.","authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Cindy Xiong Bearfield"},{"affiliations":"","email":"","is_corresponding":false,"name":"Chase Stokes"},{"affiliations":"","email":"","is_corresponding":false,"name":"Andrew Lovett"},{"affiliations":"","email":"","is_corresponding":false,"name":"Steven Franconeri"}],"award":"","doi":"10.1109/TVCG.2023.3289292","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20233289292","image_caption":"","keywords":["comparison, perception, visual grouping, bar charts, verbal conclusions."],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"What Does the Chart Say? Grouping Cues Guide Viewer Comparisons and Conclusions in Bar Charts","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20233316469","abstract":"Automated visualization recommendation facilitates the rapid creation of effective visualizations, which is especially beneficial for users with limited time and limited knowledge of data visualization. There is an increasing trend in leveraging machine learning (ML) techniques to achieve an end-to-end visualization recommendation. However, existing ML-based approaches implicitly assume that there is only one appropriate visualization for a specific dataset, which is often not true for real applications. Also, they often work like a black box, and are difficult for users to understand the reasons for recommending specific visualizations. To fill the research gap, we propose AdaVis, an adaptive and explainable approach to recommend one or multiple appropriate visualizations for a tabular dataset. It leverages a box embedding-based knowledge graph to well model the possible one-to-many mapping relations among different entities (i.e., data features, dataset columns, datasets, and visualization choices). The embeddings of the entities and relations can be learned from dataset-visualization pairs. Also, AdaVis incorporates the attention mechanism into the inference framework. Attention can indicate the relative importance of data features for a dataset and provide fine-grained explainability. 
Our extensive evaluations through quantitative metric evaluations, case studies, and user interviews demonstrate the effectiveness of AdaVis.","authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Songheng Zhang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Yong Wang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Haotian Li"},{"affiliations":"","email":"","is_corresponding":false,"name":"Huamin Qu"}],"award":"","doi":"10.1109/TVCG.2023.3316469","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20233316469","image_caption":"","keywords":["Visualization Recommendation, Logical Reasoning, Data Visualization, Knowledge Graph"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"AdaVis: Adaptive and Explainable Visualization Recommendation for Tabular Data","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20233322372","abstract":"Visualization linting is a proven effective tool in assisting users to follow established visualization guidelines. Despite its success, visualization linting for choropleth maps, one of the most popular visualizations on the internet, has yet to be investigated. In this paper, we present GeoLinter, a linting framework for choropleth maps that assists in creating accurate and robust maps. Based on a set of design guidelines and metrics drawing upon a collection of best practices from the cartographic literature, GeoLinter detects potentially suboptimal design decisions and provides further recommendations on design improvement with explanations at each step of the design process. We perform a validation study to evaluate the proposed framework's functionality with respect to identifying and fixing errors and apply its results to improve the robustness of GeoLinter. Finally, we demonstrate the effectiveness of the GeoLinter - validated through empirical studies - by applying it to a series of case studies using real-world datasets.","authors":[{"affiliations":"","email":"","is_corresponding":false,"name":"Fan Lei"},{"affiliations":"","email":"","is_corresponding":true,"name":"Arlen Fan"},{"affiliations":"","email":"","is_corresponding":false,"name":"Alan M.
MacEachren"},{"affiliations":"","email":"","is_corresponding":false,"name":"Ross Maciejewski"}],"award":"","doi":"10.1109/TVCG.2023.3322372","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20233322372","image_caption":"","keywords":["Data visualization , Image color analysis , Geology , Recommender systems , Guidelines , Bars , Visualization Author Keywords: Automated visualization design , choropleth maps , visualization linting , visualization recommendation"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"GeoLinter: A Linting Framework for Choropleth Maps","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20233326698","abstract":"Researchers have derived many theoretical models for specifying users\u2019 insights as they interact with a visualization system. These representations are essential for understanding the insight discovery process, such as when inferring user interaction patterns that lead to insight or assessing the rigor of reported insights. However, theoretical models can be difficult to apply to existing tools and user studies, often due to discrepancies in how insight and its constituent parts are defined. This paper calls attention to the consistent structures that recur across the visualization literature and describes how they connect multiple theoretical representations of insight. We synthesize a unified formalism for insights using these structures, enabling a wider audience of researchers and developers to adopt the corresponding models. Through a series of theoretical case studies, we use our formalism to compare and contrast existing theories, revealing interesting research challenges in reasoning about a user's domain knowledge and leveraging synergistic approaches in data mining and data management research.","authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Leilani Battle"},{"affiliations":"","email":"","is_corresponding":false,"name":"Alvitta Ottley"}],"award":"","doi":"10.1109/TVCG.2023.3326698","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20233326698","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"What Do We Mean When We Say \u201cInsight\u201d? 
A Formal Synthesis of Existing Theory","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20233330262","abstract":"This paper presents a computational framework for the concise encoding of an ensemble of persistence diagrams, in the form of weighted Wasserstein barycenters [100], [102] of a dictionary of atom diagrams. We introduce a multi-scale gradient descent approach for the efficient resolution of the corresponding minimization problem, which interleaves the optimization of the barycenter weights with the optimization of the atom diagrams. Our approach leverages the analytic expressions for the gradient of both sub-problems to ensure fast iterations and it additionally exploits shared-memory parallelism. Extensive experiments on public ensembles demonstrate the efficiency of our approach, with Wasserstein dictionary computations in the orders of minutes for the largest examples. We show the utility of our contributions in two applications. First, we apply Wasserstein dictionaries to data reduction and reliably compress persistence diagrams by concisely representing them with their weights in the dictionary. Second, we present a dimensionality reduction framework based on a Wasserstein dictionary defined with a small number of atoms (typically three) and encode the dictionary as a low dimensional simplex embedded in a visual space (typically in 2D). In both applications, quantitative experiments assess the relevance of our framework. Finally, we provide a C++ implementation that can be used to reproduce our results.","authors":[{"affiliations":"","email":"","is_corresponding":false,"name":"Keanu Sisouk"},{"affiliations":"","email":"","is_corresponding":false,"name":"Julie Delon"},{"affiliations":"","email":"","is_corresponding":true,"name":"Julien Tierny"}],"award":"","doi":"10.1109/TVCG.2023.3330262","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20233330262","image_caption":"","keywords":["Topological data analysis, ensemble data, persistence diagrams"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"Wasserstein Dictionaries of Persistence Diagrams","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20233332511","abstract":"We present Submerse, an end-to-end framework for visualizing flooding scenarios on large and immersive display ecologies. Specifically, we reconstruct a surface mesh from input flood simulation data and generate a to-scale 3D virtual scene by incorporating geographical data such as terrain, textures, buildings, and additional scene objects. To optimize computation and memory performance for large simulation datasets, we discretize the data on an adaptive grid using dynamic quadtrees and support level-of-detail based rendering. Moreover, to provide a perception of flooding direction for a time instance, we animate the surface mesh by synthesizing water waves.
As interaction is key for effective decision-making and analysis, we introduce two novel techniques for flood visualization in immersive systems: (1) an automatic scene-navigation method using optimal camera viewpoints generated for marked points-of-interest based on the display layout, and (2) an AR-based focus+context technique using an aux display system. Submerse is developed in collaboration between computer scientists and atmospheric scientists. We evaluate the effectiveness of our system and application by conducting workshops with emergency managers, domain experts, and concerned stakeholders in the Stony Brook Reality Deck, an immersive gigapixel facility, to visualize a superstorm flooding scenario in New York City.","authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Saeed Boorboor"},{"affiliations":"","email":"","is_corresponding":false,"name":"Yoonsang Kim"},{"affiliations":"","email":"","is_corresponding":false,"name":"Ping Hu"},{"affiliations":"","email":"","is_corresponding":false,"name":"Josef Moses"},{"affiliations":"","email":"","is_corresponding":false,"name":"Brian Colle"},{"affiliations":"","email":"","is_corresponding":false,"name":"Arie E. Kaufman"}],"award":"","doi":"10.1109/TVCG.2023.3332511","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20233332511","image_caption":"","keywords":["Camera navigation, flooding simulation visualization, immersive visualization, mixed reality"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"Submerse: Visualizing Storm Surge Flooding Simulations in Immersive Display Ecologies","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20233337173","abstract":"Visualization design studies bring together visualization researchers and domain experts to address yet unsolved data analysis challenges stemming from the needs of the domain experts. Typically, the visualization researchers lead the design study process and implementation of any visualization solutions. This setup leverages the visualization researchers' knowledge of methodology, design, and programming, but the availability to synchronize with the domain experts can hamper the design process. We consider an alternative setup where the domain experts take the lead in the design study, supported by the visualization experts. In this study, the domain experts are computer architecture experts who simulate and analyze novel computer chip designs. These chips rely on a Network-on-Chip (NOC) to connect components. The experts want to understand how the chip designs perform and what in the design led to their performance. To aid this analysis, we develop Vis4Mesh, a visualization system that provides spatial, temporal, and architectural context to simulated NOC behavior. Integration with an existing computer architecture visualization tool enables architects to perform deep-dives into specific architecture component behavior. 
We validate Vis4Mesh through a case study and a user study with computer architecture researchers. We reflect on our design and process, discussing advantages, disadvantages, and guidance for engaging in domain expert-led design studies.","authors":[{"affiliations":"","email":"","is_corresponding":false,"name":"Shaoyu Wang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Hang Yan"},{"affiliations":"","email":"","is_corresponding":false,"name":"Katherine E. Isaacs"},{"affiliations":"","email":"","is_corresponding":true,"name":"Yifan Sun"}],"award":"","doi":"10.1109/TVCG.2023.3337173","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20233337173","image_caption":"","keywords":["Data Visualization, Design Study, Network-on-Chip, Performance Analysis"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"Visual Exploratory Analysis for Designing Large-Scale Network-on-Chip Architectures: A Domain Expert-Led Design Study","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20233322898","abstract":"Visual and interactive machine learning systems (IML) are becoming ubiquitous as they empower individuals with varied machine learning expertise to analyze data. However, it remains complex to align interactions with visual marks to a user\u2019s intent for steering machine learning models. We explore using data and visual design probes to elicit users\u2019 desired interactions to steer ML models via visual encodings within IML interfaces. We conducted an elicitation study with 20 data analysts with varying expertise in ML. We summarize our findings as pairs of target-interaction, which we compare to prior systems to assess the utility of the probes. We additionally surfaced insights about factors influencing how and why participants chose to interact with visual encodings, including refraining from interacting. Finally, we reflect on the value of gathering such formative empirical evidence via data and visual design probes ahead of developing IML prototypes.
","authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Anamaria Crisan"},{"affiliations":"","email":"","is_corresponding":false,"name":"Maddie Shang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Eric Brochu"}],"award":"","doi":"10.1109/TVCG.2023.3322898","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20233322898","image_caption":"","keywords":["Design Probes, Interactive Machine Learning, Model Steering, Semantic Interaction"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"Eliciting Model Steering Interactions from Users via Data and Visual Design Probes","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20233338451","abstract":"This paper investigates the role of text in visualizations, specifically the impact of text position, semantic content, and biased wording. Two empirical studies were conducted based on two tasks (predicting data trends and appraising bias) using two visualization types (bar and line charts). While the addition of text had a minimal effect on how people perceive data trends, there was a significant impact on how biased they perceive the authors to be. This finding revealed a relationship between the degree of bias in textual information and the perception of the authors' bias. Exploratory analyses support an interaction between a person's prediction and the degree of bias they perceived. This paper also develops a crowdsourced method for creating chart annotations that range from neutral to highly biased. 
This research highlights the need for designers to mitigate potential polarization of readers' opinions based on how authors' ideas are expressed.","authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Chase Stokes"},{"affiliations":"","email":"","is_corresponding":false,"name":"Cindy Xiong Bearfield"},{"affiliations":"","email":"","is_corresponding":false,"name":"Marti Hearst"}],"award":"","doi":"10.1109/TVCG.2023.3338451","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20233338451","image_caption":"","keywords":["Visualization, text, annotation, perceived bias, judgment, prediction"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"The Role of Text in Visualizations: How Annotations Shape Perceptions of Bias and Influence Predictions","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20233337642","abstract":"Molecular docking is a key technique in various fields like structural biology, medicinal chemistry, and biotechnology. It is widely used for virtual screening during drug discovery, computer-assisted drug design, and protein engineering. A general molecular docking process consists of the target and ligand selection, their preparation, and the docking process itself, followed by the evaluation of the results. However, the most commonly used docking software provides no or very basic evaluation possibilities. Scripting and external molecular viewers are often used, which are not designed for an efficient analysis of docking results. Therefore, we developed InVADo, a comprehensive interactive visual analysis tool for large docking data. It consists of multiple linked 2D and 3D views. It filters and spatially clusters the data, and enriches it with post-docking analysis results of protein-ligand interactions and functional groups, to enable well-founded decision-making. In an exemplary case study, domain experts confirmed that InVADo facilitates and accelerates the analysis workflow. They rated it as a convenient, comprehensive, and feature-rich tool, especially useful for virtual screening.","authors":[{"affiliations":"","email":"","is_corresponding":false,"name":"Marco Sch\u00e4fer"},{"affiliations":"","email":"","is_corresponding":false,"name":"Nicolas Brich"},{"affiliations":"","email":"","is_corresponding":false,"name":"Jan By\u0161ka"},{"affiliations":"","email":"","is_corresponding":false,"name":"S\u00e9rgio M. 
Marques"},{"affiliations":"","email":"","is_corresponding":false,"name":"David Bedn\u00e1\u0159"},{"affiliations":"","email":"","is_corresponding":false,"name":"Philipp Thiel"},{"affiliations":"","email":"","is_corresponding":false,"name":"Barbora Kozl\u00edkov\u00e1"},{"affiliations":"","email":"","is_corresponding":true,"name":"Michael Krone"}],"award":"","doi":"10.1109/TVCG.2023.3337642","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20233337642","image_caption":"","keywords":["Molecular Docking, AutoDock, Virtual Screening, Visual Analysis, Visualization, Clustering, Protein-Ligand Interaction."],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"InVADo: Interactive Visual Analysis of Molecular Docking Data","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20233345373","abstract":"Traditional deep learning algorithms assume that all data is available during training, which presents challenges when handling large-scale time-varying data. To address this issue, we propose a data reduction pipeline called knowledge distillation-based implicit neural representation (KD-INR) for compressing large-scale time-varying data. The approach consists of two stages: spatial compression and model aggregation. In the first stage, each time step is compressed using an implicit neural representation with bottleneck layers and features of interest preservation-based sampling. In the second stage, we utilize an offline knowledge distillation algorithm to extract knowledge from the trained models and aggregate it into a single model. We evaluated our approach on a variety of time-varying volumetric data sets. 
Both quantitative and qualitative results, such as PSNR, LPIPS, and rendered images, demonstrate that KD-INR surpasses the state-of-the-art approaches, including learning-based (i.e., CoordNet, NeurComp, and SIREN) and lossy compression (i.e., SZ3, ZFP, and TTHRESH) methods, at various compression ratios ranging from hundreds to ten thousand.","authors":[{"affiliations":"","email":"","is_corresponding":false,"name":"Jun Han"},{"affiliations":"","email":"","is_corresponding":false,"name":"Hao Zheng"},{"affiliations":"","email":"","is_corresponding":false,"name":"Change Bi"}],"award":"","doi":"10.1109/TVCG.2023.3345373","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20233345373","image_caption":"","keywords":["Time-varying data compression, implicit neural representation, knowledge distillation, volume visualization."],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"KD-INR: Time-Varying Volumetric Data Compression via Knowledge Distillation-based Implicit Neural Representation","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20233332999","abstract":"Quantum computing offers significant speedup compared to classical computing, which has led to a growing interest among users in learning and applying quantum computing across various applications. However, quantum circuits, which are fundamental for implementing quantum algorithms, can be challenging for users to understand due to their underlying logic, such as the temporal evolution of quantum states and the effect of quantum amplitudes on the probability of basis quantum states. To fill this research gap, we propose QuantumEyes, an interactive visual analytics system to enhance the interpretability of quantum circuits through both global and local levels. For the global-level analysis, we present three coupled visualizations to delineate the changes of quantum states and the underlying reasons: a Probability Summary View to overview the probability evolution of quantum states; a State Evolution View to enable an in-depth analysis of the influence of quantum gates on the quantum states; a Gate Explanation View to show the individual qubit states and facilitate a better understanding of the effect of quantum gates. For the local-level analysis, we design a novel geometrical visualization dandelion chart to explicitly reveal how the quantum amplitudes affect the probability of the quantum state. We thoroughly evaluated QuantumEyes as well as the novel dandelion chart integrated into it through two case studies on different types of quantum algorithms and in-depth expert interviews with 12 domain experts. 
The results demonstrate the effectiveness and usability of our approach in enhancing the interpretability of quantum circuits.","authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Shaolun Ruan"},{"affiliations":"","email":"","is_corresponding":false,"name":"Qiang Guan"},{"affiliations":"","email":"","is_corresponding":false,"name":"Paul Griffin"},{"affiliations":"","email":"","is_corresponding":false,"name":"Ying Mao"},{"affiliations":"","email":"","is_corresponding":false,"name":"Yong Wang"}],"award":"","doi":"10.1109/TVCG.2023.3332999","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20233332999","image_caption":"","keywords":["Data visualization, design study, interpretability, quantum computing."],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"QuantumEyes: Towards Better Interpretability of Quantum Circuits","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20233334755","abstract":"This paper presents a computational framework for the Wasserstein auto-encoding of merge trees (MT-WAE), a novel extension of the classical auto-encoder neural network architecture to the Wasserstein metric space of merge trees. In contrast to traditional auto-encoders which operate on vectorized data, our formulation explicitly manipulates merge trees on their associated metric space at each layer of the network, resulting in superior accuracy and interpretability. Our novel neural network approach can be interpreted as a non-linear generalization of previous linear attempts [79] at merge tree encoding. It also trivially extends to persistence diagrams. Extensive experiments on public ensembles demonstrate the efficiency of our algorithms, with MT-WAE computations in the orders of minutes on average. We show the utility of our contributions in two applications adapted from previous work on merge tree encoding [79]. First, we apply MT-WAE to merge tree compression, by concisely representing them with their coordinates in the final layer of our auto-encoder. Second, we document an application to dimensionality reduction, by exploiting the latent space of our auto-encoder, for the visual analysis of ensemble data. We illustrate the versatility of our framework by introducing two penalty terms, to help preserve in the latent space both the Wasserstein distances between merge trees, as well as their clusters. In both applications, quantitative experiments assess the relevance of our framework. 
Finally, we provide a C++ implementation that can be used for reproducibility.","authors":[{"affiliations":"","email":"","is_corresponding":false,"name":"Mathieu Pont"},{"affiliations":"","email":"","is_corresponding":true,"name":"Julien Tierny"}],"award":"","doi":"10.1109/TVCG.2023.3334755","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20233334755","image_caption":"","keywords":["Topological data analysis, ensemble data, persistence diagrams, merge trees, auto-encoders, neural networks"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"Wasserstein Auto-Encoders of Merge Trees (and Persistence Diagrams)","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20233340770","abstract":"We present VoxAR, a method to facilitate an effective visualization of volume-rendered objects in optical see-through head-mounted displays (OST-HMDs). The potential of augmented reality (AR) to integrate digital information into the physical world provides new opportunities for visualizing and interpreting scientific data. However, a limitation of OST-HMD technology is that rendered pixels of a virtual object can interfere with the colors of the real-world, making it challenging to perceive the augmented virtual information accurately. We address this challenge in a two-step approach. First, VoxAR determines an appropriate placement of the volume-rendered object in the real-world scene by evaluating a set of spatial and environmental objectives, managed as user-selected preferences and pre-defined constraints. We achieve a real-time solution by implementing the objectives using a GPU shader language. Next, VoxAR adjusts the colors of the input transfer function (TF) based on the real-world placement region. Specifically, we introduce a novel optimization method that adjusts the TF colors such that the resulting volume-rendered pixels are discernible against the background and the TF maintains the perceptual mapping between the colors and data intensity values. Finally, we present an assessment of our approach through objective evaluations and subjective user studies.","authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Saeed Boorboor"},{"affiliations":"","email":"","is_corresponding":false,"name":"Matthew S. Castellana"},{"affiliations":"","email":"","is_corresponding":false,"name":"Yoonsang Kim"},{"affiliations":"","email":"","is_corresponding":false,"name":"Zhutian Chen"},{"affiliations":"","email":"","is_corresponding":false,"name":"Johanna Beyer"},{"affiliations":"","email":"","is_corresponding":false,"name":"Hanspeter Pfister"},{"affiliations":"","email":"","is_corresponding":false,"name":"Arie E. 
Kaufman"}],"award":"","doi":"10.1109/TVCG.2023.3340770","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20233340770","image_caption":"","keywords":["Adaptive Visualization, Situated Visualization, Augmented Reality, Volume Rendering"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"VoxAR: Adaptive Visualization of Volume Rendered Objects in Optical See-Through Augmented Reality","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20233345340","abstract":"Label quality issues, such as noisy labels and imbalanced class distributions, have negative effects on model performance. Automatic reweighting methods identify problematic samples with label quality issues by recognizing their negative effects on validation samples and assigning lower weights to them. However, these methods fail to achieve satisfactory performance when the validation samples are of low quality. To tackle this, we develop Reweighter, a visual analysis tool for sample reweighting. The reweighting relationships between validation samples and training samples are modeled as a bipartite graph. Based on this graph, a validation sample improvement method is developed to improve the quality of validation samples. Since the automatic improvement may not always be perfect, a co-cluster-based bipartite graph visualization is developed to illustrate the reweighting relationships and support the interactive adjustments to validation samples and reweighting results. The adjustments are converted into the constraints of the validation sample improvement method to further improve validation samples. 
We demonstrate the effectiveness of Reweighter in improving reweighting results through quantitative evaluation and two case studies.","authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Weikai Yang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Yukai Guo"},{"affiliations":"","email":"","is_corresponding":false,"name":"Jing Wu"},{"affiliations":"","email":"","is_corresponding":false,"name":"Zheng Wang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Lan-Zhe Guo"},{"affiliations":"","email":"","is_corresponding":false,"name":"Yu-Feng Li"},{"affiliations":"","email":"","is_corresponding":false,"name":"Shixia Liu"}],"award":"","doi":"10.1109/TVCG.2023.3345340","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20233345340","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"Interactive Reweighting for Mitigating Label Quality Issues","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20233323150","abstract":"We examined user preferences to combine multiple interaction modalities for collaborative interaction with data shown on large vertical displays. Large vertical displays facilitate visual data exploration and allow the use of diverse interaction modalities by multiple users at different distances from the screen. Yet, how to offer multiple interaction modalities is a non-trivial problem. We conducted an elicitation study with 20 participants that generated 1015 interaction proposals combining touch, speech, pen, and mid-air gestures. Given the opportunity to interact using these four modalities, participants preferred speech interaction in 10 of 15 low-level tasks and direct manipulation for straightforward tasks such as showing a tooltip or selecting. In contrast to previous work, participants most favored unimodal and personal interactions. We identified what we call collaborative synonyms among their interaction proposals and found that pairs of users collaborated either unimodally and simultaneously or multimodally and sequentially. We provide insights into how end-users associate visual exploration tasks with certain modalities and how they collaborate at different interaction distances using specific interaction modalities.
The supplemental material is available at https://osf.io/m8zuh.","authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Gabriela Molina Le\u00f3n"},{"affiliations":"","email":"","is_corresponding":false,"name":"Petra Isenberg"},{"affiliations":"","email":"","is_corresponding":false,"name":"Andreas Breiter"}],"award":"","doi":"10.1109/TVCG.2023.3323150","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20233323150","image_caption":"","keywords":["Multimodal interaction, collaborative work, large vertical displays, elicitation study, spatio-temporal data"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"Eliciting Multimodal and Collaborative Interactions for Data Exploration on Large Vertical Displays","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20233334513","abstract":"Data integration is often performed to consolidate information from multiple disparate data sources during visual data analysis. However, integration operations are usually separate from visual analytics operations such as encode and filter in both interface design and empirical research. We conducted a preliminary user study to investigate whether and how data integration should be incorporated directly into the visual analytics process. We used two interface alternatives featuring contrasting approaches to the data preparation and analysis workflow: manual file-based ex-situ integration as a separate step from visual analytics operations; and automatic UI-based in-situ integration merged with visual analytics operations. Participants were asked to complete specific and free-form tasks with each interface, browsing for patterns, generating insights, and summarizing relationships between attributes distributed across multiple files. Analyzing participants' interactions and feedback, we found both task completion time and total interactions to be similar across interfaces and tasks, as well as unique integration strategies between interfaces and emergent behaviors related to satisficing and cognitive bias. Participants' time spent and interactions revealed that in-situ integration enabled users to spend more time on analysis tasks compared with ex-situ integration. Participants' integration strategies and analytical behaviors revealed differences in interface usage for generating and tracking hypotheses and insights. 
With these results, we synthesized preliminary guidelines for designing future visual analytics interfaces that can support integrating attributes throughout an active analysis process.","authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Adam Coscia"},{"affiliations":"","email":"","is_corresponding":false,"name":"Ashley Suh"},{"affiliations":"","email":"","is_corresponding":false,"name":"Remco Chang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Alex Endert"}],"award":"","doi":"10.1109/TVCG.2023.3334513","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20233334513","image_caption":"","keywords":["Visual analytics, Data integration, User interface design, Integration strategies, Analytical behaviors."],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"Preliminary Guidelines For Combining Data Integration and Visual Data Analysis","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20233341990","abstract":"We report on challenges and considerations for supporting design processes for visualizations in motion embedded in sports videos. We derive our insights from analyzing swimming race visualizations and motion-related data, building a technology probe, as well as a study with designers. Understanding how to design situated visualizations in motion is important for a variety of contexts. Competitive sports coverage, in particular, increasingly includes information on athlete or team statistics and records. Although moving visual representations attached to athletes or other targets are starting to appear, systematic investigations on how to best support their design process in the context of sports videos are still missing. Our work makes several contributions in identifying opportunities for visualizations to be added to swimming competition coverage but, most importantly, in identifying requirements and challenges for designing situated visualizations in motion. 
Our investigations include the analysis of a survey with swimming enthusiasts on their motion-related information needs, an ideation workshop to collect designs and elicit design challenges, the design of a technology probe that allows designers to create embedded visualizations in motion based on real data, and an evaluation with visualization designers that aimed to understand the benefits of designing directly on videos.","authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Lijie Yao"},{"affiliations":"","email":"","is_corresponding":false,"name":"Romain Vuillemot"},{"affiliations":"","email":"","is_corresponding":false,"name":"Anastasia Bezerianos"},{"affiliations":"","email":"","is_corresponding":false,"name":"Petra Isenberg"}],"award":"","doi":"10.1109/TVCG.2023.3341990","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20233341990","image_caption":"","keywords":["Data visualization, Sports, Videos, Probes, Surveys, Authoring systems, Games, Design framework, Embedded visualization, Sports analytics, Visualization in motion"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"Designing for Visualization in Motion: Embedding Visualizations in Swimming Videos","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20233346713","abstract":"Recent growth in the popularity of large language models has led to their increased usage for summarizing, predicting, and generating text, making it vital to help researchers and engineers understand how and why they work. We present KnowledgeVIS, a human-in-the-loop visual analytics system for interpreting language models using fill-in-the-blank sentences as prompts. By comparing predictions between sentences, KnowledgeVIS reveals learned associations that intuitively connect what language models learn during training to natural language tasks downstream, helping users create and test multiple prompt variations, analyze predicted words using a novel semantic clustering technique, and discover insights using interactive visualizations. Collectively, these visualizations help users identify the likelihood and uniqueness of individual predictions, compare sets of predictions between prompts, and summarize patterns and relationships between predictions across all prompts. 
We demonstrate the capabilities of KnowledgeVIS with feedback from six NLP experts as well as three different use cases: (1) probing biomedical knowledge in two domain-adapted models; (2) evaluating harmful identity stereotypes; and (3) discovering facts and relationships between three general-purpose models.","authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Adam Coscia"},{"affiliations":"","email":"","is_corresponding":false,"name":"Alex Endert"}],"award":"","doi":"10.1109/TVCG.2023.3346713","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20233346713","image_caption":"","keywords":["Visual analytics, language models, prompting, interpretability, machine learning."],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"KnowledgeVIS: Interpreting Language Models by Comparing Fill-in-the-Blank Prompts","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20243350076","abstract":"Ensembles of contours arise in various applications like simulation, computer-aided design, and semantic segmentation. Uncovering ensemble patterns and analyzing individual members is a challenging task that suffers from clutter. Ensemble statistical summarization can alleviate this issue by permitting analysis of ensembles' distributional components like the mean and median, confidence intervals, and outliers. Contour boxplots, powered by Contour Band Depth (CBD), are a popular non-parametric ensemble summarization method that benefits from CBD's generality, robustness, and theoretical properties. In this work, we introduce Inclusion Depth (ID), a new notion of contour depth with three defining characteristics. First, ID is a generalization of functional Half-Region Depth, which offers several theoretical guarantees. Second, ID relies on a simple principle: the inside/outside relationships between contours. This facilitates implementing ID and understanding its results. Third, the computational complexity of ID scales quadratically in the number of members of the ensemble, improving CBD's cubic complexity. In practice, this also speeds up the computation, enabling the use of ID for exploring large contour ensembles or in contexts requiring multiple depth evaluations like clustering. In a series of experiments on synthetic data and case studies with meteorological and segmentation data, we evaluate ID's performance and demonstrate its capabilities for the visual analysis of contour ensembles.","authors":[{"affiliations":"","email":"","is_corresponding":false,"name":"Nicolas F. 
Chaves-de-Plaza"},{"affiliations":"","email":"","is_corresponding":false,"name":"Prerak Mody"},{"affiliations":"","email":"","is_corresponding":false,"name":"Marius Staring"},{"affiliations":"","email":"","is_corresponding":false,"name":"Ren\u00e9 van Egmond"},{"affiliations":"","email":"","is_corresponding":false,"name":"Anna Vilanova"},{"affiliations":"","email":"","is_corresponding":false,"name":"Klaus Hildebrandt"}],"award":"","doi":"10.1109/TVCG.2024.3350076","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20243350076","image_caption":"","keywords":["Uncertainty visualization, contours, ensemble summarization, depth statistics."],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"Inclusion Depth for Contour Ensembles","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20243354561","abstract":"Interactive visualization can support fluid exploration but is often limited to predetermined tasks. Scripting can support a vast range of queries but may be more cumbersome for free-form exploration. Embedding interactive visualization in scripting environments, such as computational notebooks, provides an opportunity to leverage the strengths of both direct manipulation and scripting. We investigate interactive visualization design methodology, choices, and strategies under this paradigm through a design study of calling context trees used in performance analysis, a field which exemplifies typical exploratory data analysis workflows with big data and hard to define problems. We first produce a formal task analysis assigning tasks to graphical or scripting contexts based on their specificity, frequency, and suitability. We then design a notebook-embedded interactive visualization and validate it with intended users. In a follow-up study, we present participants with multiple graphical and scripting interaction modes to elicit feedback about notebook-embedded visualization design, finding consensus in support of the interaction model. We report and reflect on observations regarding the process and design implications for combining visualization and scripting in notebooks.","authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Connor Scully-Allison"},{"affiliations":"","email":"","is_corresponding":false,"name":"Ian Lumsden"},{"affiliations":"","email":"","is_corresponding":false,"name":"Katy Williams"},{"affiliations":"","email":"","is_corresponding":false,"name":"Jesse Bartels"},{"affiliations":"","email":"","is_corresponding":false,"name":"Michela Taufer"},{"affiliations":"","email":"","is_corresponding":false,"name":"Stephanie Brink"},{"affiliations":"","email":"","is_corresponding":false,"name":"Abhinav Bhatele"},{"affiliations":"","email":"","is_corresponding":false,"name":"Olga Pearce"},{"affiliations":"","email":"","is_corresponding":false,"name":"Katherine E. 
Isaacs"}],"award":"","doi":"10.1109/TVCG.2024.3354561","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20243354561","image_caption":"","keywords":["Exploratory Data Analysis, Interactive Data Analysis, Computational Notebooks, Hybrid Visualization-Scripting, Visualization Design"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"Design Concerns for Integrated Scripting and Interactive Visualization in Notebook Environments","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20243355884","abstract":"News articles containing data visualizations play an important role in informing the public on issues ranging from public health to politics. Recent research on the persuasive appeal of data visualizations suggests that prior attitudes can be notoriously difficult to change. Inspired by an NYT article, we designed two experiments to evaluate the impact of elicitation and contrasting narratives on attitude change, recall, and engagement. We hypothesized that eliciting prior beliefs leads to more elaborative thinking that ultimately results in higher attitude change, better recall, and engagement. Our findings revealed that visual elicitation leads to higher engagement in terms of feelings of surprise. While there is an overall attitude change across all experiment conditions, we did not observe a significant effect of belief elicitation on attitude change. With regard to recall error, while participants in the draw trend elicitation exhibited significantly lower recall error than participants in the categorize trend condition, we found no significant difference in recall error when comparing elicitation conditions to no elicitation. In a follow-up study, we added contrasting narratives with the purpose of making the main visualization (communicating data on the focal issue) appear strikingly different. Compared to the results of Study 1, we found that contrasting narratives improved engagement in terms of surprise and interest but interestingly resulted in higher recall error and no significant change in attitude. 
We discuss the effects of elicitation and contrasting narratives in the context of topic involvement and the strengths of temporal trends encoded in the data visualization.","authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Milad Rogha"},{"affiliations":"","email":"","is_corresponding":false,"name":"Subham Sah"},{"affiliations":"","email":"","is_corresponding":false,"name":"Alireza Karduni"},{"affiliations":"","email":"","is_corresponding":false,"name":"Douglas Markant"},{"affiliations":"","email":"","is_corresponding":false,"name":"Wenwen Dou"}],"award":"","doi":"10.1109/TVCG.2024.3355884","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20243355884","image_caption":"","keywords":["Data Visualization, Market Research, Visualization, Uncertainty, Data Models, Correlation, Attitude Control, Belief Elicitation, Visual Elicitation, Data Visualization, Contrasting Narratives"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"The Impact of Elicitation and Contrasting Narratives on Engagement, Recall and Attitude Change with News Articles Containing Data Visualization","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20233337396","abstract":"Partitioning a dynamic network into subsets (i.e., snapshots) based on disjoint time intervals is a widely used technique for understanding how structural patterns of the network evolve. However, selecting an appropriate time window (i.e., slicing a dynamic network into snapshots) is challenging and time-consuming, often involving a trial-and-error approach to investigating underlying structural patterns. To address this challenge, we present MoNetExplorer, a novel interactive visual analytics system that leverages temporal network motifs to provide recommendations for window sizes and support users in visually comparing different slicing results. MoNetExplorer provides a comprehensive analysis based on window size, including (1) a temporal overview to identify the structural information, (2) temporal network motif composition, and (3) node-link-diagram-based details to enable users to identify and understand structural patterns at various temporal resolutions. To demonstrate the effectiveness of our system, we conducted a case study with network researchers using two real-world dynamic network datasets. 
Our case studies show that the system effectively supports users in gaining valuable insights into the temporal and structural aspects of dynamic networks.","authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Seokweon Jung"},{"affiliations":"","email":"","is_corresponding":false,"name":"DongHwa Shin"},{"affiliations":"","email":"","is_corresponding":false,"name":"Hyeon Jeon"},{"affiliations":"","email":"","is_corresponding":false,"name":"Kiroong Choe"},{"affiliations":"","email":"","is_corresponding":false,"name":"Jinwook Seo"}],"award":"","doi":"10.1109/TVCG.2023.3337396","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20233337396","image_caption":"","keywords":["Visual analytics, Measurement, Size measurement, Windows, Time measurement, Data visualization, Task analysis, Visual analytics, Dynamic networks, Temporal network motifs, Interactive network slicing"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"A Visual Analytics System for Analyzing Dynamic Networks with Temporal Network Motifs","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20243358919","abstract":"We conduct two in-lab experiments (N=93) to evaluate the effectiveness of Gantt charts, extended Gantt charts, and stringline charts for visualizing fixed-order event sequence data. We first formulate five types of event sequences and define three types of sequence elements: point events, interval events, and the temporal gaps between them. Our two experiments focus on event sequences with a pre-defined, fixed order, and measure task error rates and completion time. The first experiment shows single sequences and assesses the three charts' performance in comparing event duration or gap. The second experiment shows multiple sequences and evaluates how well the charts reveal temporal patterns. The results suggest that when visualizing single fixed-order event sequences, 1) Gantt and extended Gantt charts lead to comparable error rates in the duration-comparing task; 2) Gantt charts exhibit completion times shorter than or equal to those of extended Gantt charts; 3) both Gantt and extended Gantt charts demonstrate shorter completion times than stringline charts; 4) however, stringline charts outperform the other two charts with fewer errors in the comparing task when event type counts are high. Additionally, when visualizing multiple point-based fixed-order event sequences, stringline charts require less time than Gantt charts for people to find temporal patterns. 
Based on these findings, we discuss design opportunities for visualizing fixed-order event sequences and future avenues for optimizing these charts.","authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Junxiu Tang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Fumeng Yang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Jiang Wu"},{"affiliations":"","email":"","is_corresponding":false,"name":"Yifang Wang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Jiayi Zhou"},{"affiliations":"","email":"","is_corresponding":false,"name":"Xiwen Cai"},{"affiliations":"","email":"","is_corresponding":false,"name":"Lingyun Yu"},{"affiliations":"","email":"","is_corresponding":false,"name":"Yingcai Wu"}],"award":"","doi":"10.1109/TVCG.2024.3358919","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20243358919","image_caption":"","keywords":["Gantt chart, stringline chart, Marey's graph, event sequence, empirical study"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"A Comparative Study on Fixed-order Event Sequence Visualizations: Gantt, Extended Gantt, and Stringline Charts","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20243364388","abstract":"Seasonal-trend decomposition based on loess (STL) is a powerful tool to explore time series data visually. In this paper, we present an extension of STL to uncertain data, named uncertainty-aware STL (UASTL). Our method propagates multivariate Gaussian distributions mathematically exactly through the entire analysis and visualization pipeline. Thereby, stochastic quantities shared between the components of the decomposition are preserved. Moreover, we present application scenarios with uncertainty modeling based on Gaussian processes, e.g., data with uncertain areas or missing values. Besides these mathematical results and modeling aspects, we introduce visualization techniques that address the challenges of uncertainty visualization and the problem of visualizing highly correlated components of a decomposition. The global uncertainty propagation enables the time series visualization with STL-consistent samples, the exploration of correlation between and within the decomposition's components, and the analysis of the impact of varying uncertainty. Finally, we show the usefulness of UASTL and the importance of uncertainty visualization with several examples. 
Thereby, a comparison with conventional STL is performed.","authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Tim Krake"},{"affiliations":"","email":"","is_corresponding":false,"name":"Daniel Kl\u00f6tzl"},{"affiliations":"","email":"","is_corresponding":false,"name":"David H\u00e4gele"},{"affiliations":"","email":"","is_corresponding":false,"name":"Daniel Weiskopf"}],"award":"","doi":"10.1109/TVCG.2024.3364388","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20243364388","image_caption":"","keywords":["- I.6.9.g Visualization techniques and methodologies < I.6.9 Visualization < I.6 Simulation, Modeling, and Visualization < I Compu - G.3 Probability and Statistics < G Mathematics of Computing - G.3.n Statistical computing < G.3 Probability and Statistics < G Mathematics of Computing - G.3.p Stochastic processes < G.3 Probability and Statistics < G Mathematics of Computing"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"Uncertainty-Aware Seasonal-Trend Decomposition Based on Loess","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20243364841","abstract":"The need to understand the structure of hierarchical or high-dimensional data is present in a variety of fields. Hyperbolic spaces have proven to be an important tool for embedding computations and analysis tasks as their non-linear nature lends itself well to tree or graph data. Subsequently, they have also been used in the visualization of high-dimensional data, where they exhibit increased embedding performance. However, none of the existing dimensionality reduction methods for embedding into hyperbolic spaces scale well with the size of the input data. That is because the embeddings are computed via iterative optimization schemes and the computation cost of every iteration is quadratic in the size of the input. Furthermore, due to the non-linear nature of hyperbolic spaces, Euclidean acceleration structures cannot directly be translated to the hyperbolic setting. This paper introduces the first acceleration structure for hyperbolic embeddings, building upon a polar quadtree. We compare our approach with existing methods and demonstrate that it computes embeddings of similar quality in significantly less time. Implementation and scripts for the experiments can be found at this https URL.","authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Martin Skrodzki"},{"affiliations":"","email":"","is_corresponding":false,"name":"Hunter van Geffen"},{"affiliations":"","email":"","is_corresponding":false,"name":"Nicolas F. 
Chaves-de-Plaza"},{"affiliations":"","email":"","is_corresponding":false,"name":"Thomas H\u00f6llt"},{"affiliations":"","email":"","is_corresponding":false,"name":"Elmar Eisemann"},{"affiliations":"","email":"","is_corresponding":false,"name":"Klaus Hildebrandt"}],"award":"","doi":"10.1109/TVCG.2024.3364841","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20243364841","image_caption":"","keywords":["Human-Computer Interaction (cs.HC); Artificial Intelligence (cs.AI); Machine Learning (cs.LG); Quantitative Methods (q-bio.QM); Machine Learning (stat.ML) Dimensionality reduction, t-SNE, hyperbolic embedding, acceleration structure"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"Accelerating hyperbolic t-SNE","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20243365089","abstract":"Implicit Neural representations (INRs) are widely used for scientific data reduction and visualization by modeling the function that maps a spatial location to a data value. Without any prior knowledge about the spatial distribution of values, we are forced to sample densely from INRs to perform visualization tasks like iso-surface extraction which can be very computationally expensive. Recently, range analysis has shown promising results in improving the efficiency of geometric queries, such as ray casting and hierarchical mesh extraction, on INRs for 3D geometries by using arithmetic rules to bound the output range of the network within a spatial region. However, the analysis bounds are often too conservative for complex scientific data. In this paper, we present an improved technique for range analysis by revisiting the arithmetic rules and analyzing the probability distribution of the network output within a spatial region. We model this distribution efficiently as a Gaussian distribution by applying the central limit theorem. Excluding low probability values, we are able to tighten the output bounds, resulting in a more accurate estimation of the value range, and hence more accurate identification of iso-surface cells and more efficient iso-surface extraction on INRs. 
Our approach demonstrates superior performance in terms of the iso-surface extraction time on four datasets compared to the original range analysis method and can also be generalized to other geometric query tasks.","authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Haoyu Li"},{"affiliations":"","email":"","is_corresponding":false,"name":"Han-Wei Shen"}],"award":"","doi":"10.1109/TVCG.2024.3365089","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20243365089","image_caption":"","keywords":["Iso-surface extraction, implicit neural representation, uncertainty propagation, affine arithmetic."],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"Improving Efficiency of Iso-Surface Extraction on Implicit Neural Representations Using Uncertainty Propagation","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20233346641","abstract":"Currently, growing data sources and long-running algorithms impede user attention and interaction with visual analytics applications. Progressive visualization (PV) and visual analytics (PVA) alleviate this problem by allowing immediate feedback and interaction with large datasets and complex computations, avoiding waiting for complete results by using partial results improving with time. Yet, creating a progressive visualization requires more effort than a regular visualization but also opens up new possibilities, such as steering the computations towards more relevant parts of the data, thus saving computational resources. However, there is currently no comprehensive overview of the design space for progressive visualization systems. We surveyed the related work of PV and derived a new taxonomy for progressive visualizations by systematically categorizing all PV publications that included visualizations with progressive features. Progressive visualizations can be categorized by well-known visualization taxonomies, but we also found that progressive visualizations can be distinguished by the way they manage their data processing, data domain, and visual update. Furthermore, we identified key properties such as uncertainty, steering, visual stability, and real-time processing that are significantly different with progressive applications. We also collected evaluation methodologies reported by the publications and conclude with statistical findings, research gaps, and open challenges. 
A continuously updated visual browser of the survey data is available at visualsurvey.net/pva.","authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Alex Ulmer"},{"affiliations":"","email":"","is_corresponding":false,"name":"Marco Angelini"},{"affiliations":"","email":"","is_corresponding":false,"name":"Jean-Daniel Fekete"},{"affiliations":"","email":"","is_corresponding":false,"name":"J\u00f6rn Kohlhammer"},{"affiliations":"","email":"","is_corresponding":false,"name":"Thorsten May"}],"award":"","doi":"10.1109/TVCG.2023.3346641","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20233346641","image_caption":"","keywords":["Data visualization, Convergence, Visual analytics, Taxonomy Surveys, Rendering (computer graphics), Task analysis, Progressive Visual Analytics, Progressive Visualization, Taxonomy, State-of-the-Art Report, Survey"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"A Survey on Progressive Visualization","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20233287585","abstract":"Data visualization and journalism are deeply connected. From early infographics to recent data-driven storytelling, visualization has become an integrated part of contemporary journalism, primarily as a communication artifact to inform the general public. Data journalism, harnessing the power of data visualization, has emerged as a bridge between the growing volume of data and our society. Visualization research that centers around data storytelling has sought to understand and facilitate such journalistic endeavors. However, a recent metamorphosis in journalism has brought broader challenges and opportunities that extend beyond mere communication of data. We present this article to enhance our understanding of such transformations and thus broaden visualization research's scope and practical contribution to this evolving field. We first survey recent significant shifts, emerging challenges, and computational practices in journalism. We then summarize six roles of computing in journalism and their implications. Based on these implications, we provide propositions for visualization research concerning each role. 
Ultimately, by mapping the roles and propositions onto a proposed ecological model and contextualizing existing visualization research, we surface seven general topics and a series of research agendas that can guide future visualization research at this intersection.","authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Yu Fu"},{"affiliations":"","email":"","is_corresponding":false,"name":"John Stasko"}],"award":"","doi":"10.1109/TVCG.2023.3287585","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20233287585","image_caption":"","keywords":["Computational journalism, data visualization, data-driven storytelling, journalism"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"More Than Data Stories: Broadening the Role of Visualization in Contemporary Journalism","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20243356566","abstract":"The increasing ubiquity of data in everyday life has elevated the importance of data literacy and accessible data representations, particularly for individuals with disabilities. While prior research predominantly focuses on the needs of the visually impaired, our survey aims to broaden this scope by investigating accessible data representations across a more inclusive spectrum of disabilities. After conducting a systematic review of 152 accessible data representation papers from ACM and IEEE databases, we found that roughly 78% of existing articles center on vision impairments. In this paper, we conduct a comprehensive review of the remaining 22% of papers focused on underrepresented disability communities. We developed categorical dimensions based on accessibility, visualization, and human-computer interaction to classify the papers. These dimensions include the community of focus, issues addressed, contribution type, study methods, participants, data type, visualization type, and data domain. Our work redefines accessible data representations by illustrating their application for disabilities beyond those related to vision. Building on our literature review, we identify and discuss opportunities for future research in accessible data representations. All supplemental materials are available at https://osf.io/yv4xm/?view_only=7b36a3fbf7a14b3888029966faa3def9.","authors":[{"affiliations":"","email":"","is_corresponding":false,"name":"Brianna L. Wimer"},{"affiliations":"","email":"","is_corresponding":false,"name":"Laura South"},{"affiliations":"","email":"","is_corresponding":false,"name":"Keke Wu"},{"affiliations":"","email":"","is_corresponding":false,"name":"Danielle Albers Szafir"},{"affiliations":"","email":"","is_corresponding":false,"name":"Michelle A. Borkin"},{"affiliations":"","email":"","is_corresponding":false,"name":"Ronald A. 
Metoyer"}],"award":"","doi":"10.1109/TVCG.2024.3356566","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20243356566","image_caption":"","keywords":["Accessibility, Data Representations."],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"Beyond Vision Impairments: Redefining the Scope of Accessible Data Representations","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20233306356","abstract":"A multitude of studies have been conducted on graph drawing, but many existing methods only focus on optimizing a single aesthetic aspect of graph layouts. There are a few existing methods that attempt to develop a flexible solution for optimizing different aesthetic aspects measured by different aesthetic criteria. Furthermore, thanks to the significant advance in deep learning techniques, several deep learning-based layout methods were proposed recently, which have demonstrated the advantages of the deep learning approaches for graph drawing. However, none of these existing methods can be directly applied to optimizing non-differentiable criteria without special accommodation. In this work, we propose a novel Generative Adversarial Network (GAN) based deep learning framework for graph drawing, called SmartGD, which can optimize any quantitative aesthetic goals even though they are non-differentiable. In the cases where the aesthetic goal is too abstract to be described mathematically, SmartGD can draw graphs in a similar style as a collection of good layout examples, which might be selected by humans based on the abstract aesthetic goal. To demonstrate the effectiveness and efficiency of SmartGD, we conduct experiments on minimizing stress, minimizing edge crossing, maximizing crossing angle, and a combination of multiple aesthetics. 
The experimental results show that, compared with several popular graph drawing algorithms, SmartGD achieves good performance both quantitatively and qualitatively.","authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Xiaoqi Wang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Kevin Yen"},{"affiliations":"","email":"","is_corresponding":false,"name":"Yifan Hu"},{"affiliations":"","email":"","is_corresponding":false,"name":"Han-Wei Shen"}],"award":"","doi":"10.1109/TVCG.2023.3306356","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20233306356","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"SmartGD: A GAN-Based Graph Drawing Framework for Diverse Aesthetic Goals","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20233346640","abstract":"Is it true that if citizens understand hurricane probabilities, they will make more rational decisions for evacuation? Finding answers to such questions is not straightforward in the literature because the terms \u201cjudgment\u201d and \u201cdecision making\u201d are often used interchangeably. This terminology conflation leads to a lack of clarity on whether people make suboptimal decisions because of inaccurate judgments of information conveyed in visualizations or because they use alternative yet currently unknown heuristics. To decouple judgment from decision making, we review relevant concepts from the literature and present two preregistered experiments (N=601) to investigate if the task (judgment vs. decision making), the scenario (sports vs. humanitarian), and the visualization (quantile dotplots, density plots, probability bars) affect accuracy. While experiment 1 was inconclusive, we found evidence for a difference in experiment 2. Contrary to our expectations and previous research, which found decisions less accurate than their direct-equivalent judgments, our results pointed in the opposite direction. Our findings further revealed that decisions were less vulnerable to status-quo bias, suggesting decision makers may disfavor responses associated with inaction. We also found that both scenario and visualization types can influence people's judgments and decisions. Although effect sizes are not large and results should be interpreted carefully, we conclude that judgments cannot be safely used as proxy tasks for decision making, and discuss implications for visualization research and beyond. 
Materials and preregistrations are available at https://osf.io/ufzp5/?view_only=adc0f78a23804c31bf7fdd9385cb264f.","authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Ba\u015fak Oral"},{"affiliations":"","email":"","is_corresponding":false,"name":"Pierre Dragicevic"},{"affiliations":"","email":"","is_corresponding":false,"name":"Alexandru Telea"},{"affiliations":"","email":"","is_corresponding":false,"name":"Evanthia Dimara"}],"award":"","doi":"10.1109/TVCG.2023.3346640","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20233346640","image_caption":"","keywords":["Data visualization, Task analysis, Decision making, Visualization, Bars, Sports, Terminology, Cognition, Decision Making, Judgment, Psychology, Visualization"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"Decoupling Judgment and Decision Making: A Tale of Two Tails","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20243372620","abstract":"Small multiples are a popular visualization method, displaying different views of a dataset using multiple frames, often with the same scale and axes. However, there is a need to address their potential constraints, especially in the context of human cognitive capacity limits. These limits dictate the maximum information our mind can process at once. We explore the issue of capacity limitation by testing competing theories that describe how the number of frames shown in a display, the scale of the frames, and time constraints impact user performance with small multiples of line charts in an energy grid scenario. In two online studies (Experiment 1: n = 141; Experiment 2: n = 360) and a follow-up eye-tracking analysis (n = 5), we found a linear decline in accuracy with increasing frames across seven tasks, which was not fully explained by differences in frame size, suggesting visual search challenges. Moreover, the studies demonstrate that highlighting specific frames can mitigate some visual search difficulties but, surprisingly, not eliminate them. This research offers insights into optimizing the utility of small multiples by aligning them with human limitations.","authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Helia Hosseinpour"},{"affiliations":"","email":"","is_corresponding":false,"name":"Laura E. Matzen"},{"affiliations":"","email":"","is_corresponding":false,"name":"Kristin M. Divis"},{"affiliations":"","email":"","is_corresponding":false,"name":"Spencer C. 
Castro"},{"affiliations":"","email":"","is_corresponding":false,"name":"Lace Padilla"}],"award":"","doi":"10.1109/TVCG.2024.3372620","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20243372620","image_caption":"","keywords":["Cognition, small multiples, time-series data"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"Examining Limits of Small Multiples: Frame Quantity Impacts Judgments with Line Graphs","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20243381453","abstract":"Scatterplots provide a visual representation of bivariate data (or 2D embeddings of multivariate data) that allows for effective analyses of data dependencies, clusters, trends, and outliers. Unfortunately, classical scatterplots suffer from scalability issues, since growing data sizes eventually lead to overplotting and visual clutter on a screen with a fixed resolution, which hinders the data analysis process. We propose an algorithm that compensates for irregular sample distributions by a smooth transformation of the scatterplot's visual domain. Our algorithm evaluates the scatterplot's density distribution to compute a regularization mapping based on integral images of the rasterized density function. The mapping preserves the samples' neighborhood relations. Few regularization iterations suffice to achieve a nearly uniform sample distribution that efficiently uses the available screen space. We further propose approaches to visually convey the transformation that was applied to the scatterplot and compare them in a user study. 
We present a novel parallel algorithm for fast GPU-based integral-image computation, which allows for integrating our de-cluttering approach into interactive visual data analysis systems.","authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Hennes Rave"},{"affiliations":"","email":"","is_corresponding":false,"name":"Vladimir Molchanov"},{"affiliations":"","email":"","is_corresponding":false,"name":"Lars Linsen"}],"award":"","doi":"10.1109/TVCG.2024.3381453","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20243381453","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"De-cluttering Scatterplots with Integral Images","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20243382607","abstract":"Advanced manufacturing creates increasingly complex objects with material compositions that are often difficult to characterize by a single modality. Our domain scientists are going beyond traditional methods by employing both X-ray and neutron computed tomography to obtain complementary representations expected to better resolve material boundaries. However, the use of two modalities creates its own challenges for visualization, requiring either complex adjustments of multimodal transfer functions or the need for multiple views. Together with experts in nondestructive evaluation, we designed a novel interactive multimodal visualization approach to create a combined view of the co-registered X-ray and neutron acquisitions of industrial objects. Using an automatic topological segmentation of the bivariate histogram of X-ray and neutron values as a starting point, the system provides a simple yet effective interface to easily create, explore, and adjust a multimodal visualization. We propose a widget with simple brushing interactions that enables the user to quickly correct the segmented histogram results. Our semiautomated system enables domain experts to intuitively explore large multimodal datasets without the need for either advanced segmentation algorithms or knowledge of visualization techniques. 
We demonstrate our approach using synthetic examples, industrial phantom objects created to stress multimodal scanning techniques, and real-world objects, and we discuss expert feedback.","authors":[{"affiliations":"","email":"","is_corresponding":false,"name":"Xuan Huang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Haichao Miao"},{"affiliations":"","email":"","is_corresponding":false,"name":"Hyojin Kim"},{"affiliations":"","email":"","is_corresponding":false,"name":"Andrew Townsend"},{"affiliations":"","email":"","is_corresponding":false,"name":"Kyle Champley"},{"affiliations":"","email":"","is_corresponding":false,"name":"Joseph Tringe"},{"affiliations":"","email":"","is_corresponding":false,"name":"Valerio Pascucci"},{"affiliations":"","email":"","is_corresponding":false,"name":"Peer-Timo Bremer"}],"award":"","doi":"10.1109/TVCG.2024.3382607","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20243382607","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"Bimodal Visualization of Industrial X-ray and Neutron Computed Tomography Data","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20243374571","abstract":"Visualization Recommendation Systems (VRSs) are a novel and challenging field of study aiming to help generate insightful visualizations from data and support non-expert users in information discovery. Among the many contributions proposed in this area, some systems embrace the ambitious objective of imitating human analysts to identify relevant relationships in data and make appropriate design choices to represent these relationships with insightful charts. We denote these systems as \"agnostic\" VRSs since they do not rely on human-provided constraints and rules but try to learn the task autonomously. Despite the high application potential of agnostic VRSs, their progress is hindered by several obstacles, including the absence of standardized datasets to train recommendation algorithms, the difficulty of learning design rules, and the challenge of defining quantitative criteria for evaluating the perceptual effectiveness of generated plots. 
This paper summarizes the literature on agnostic VRSs and outlines promising future research directions.","authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Luca Podo"},{"affiliations":"","email":"","is_corresponding":false,"name":"Bardh Prenkaj"},{"affiliations":"","email":"","is_corresponding":false,"name":"Paola Velardi"}],"award":"","doi":"10.1109/TVCG.2024.3374571","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20243374571","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"Agnostic Visual Recommendation Systems: Open Challenges and Future Directions","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20243382760","abstract":"Time-stamped event sequences (TSEQs) are time-oriented data without value information, shifting the focus of users to the exploration of temporal event occurrences. TSEQs exist in application domains, such as sleeping behavior, earthquake aftershocks, and stock market crashes. Domain experts face four challenges, for which they could use interactive and visual data analysis methods. First, TSEQs can be large with respect to both the number of sequences and events, often leading to millions of events. Second, domain experts need validated metrics and features to identify interesting patterns. Third, after identifying interesting patterns, domain experts contextualize the patterns to foster sensemaking. Finally, domain experts seek to reduce data complexity by data simplification and machine learning support. We present IVESA, a visual analytics approach for TSEQs. It supports the analysis of TSEQs at the granularities of sequences and events, supported with metrics and feature analysis tools. IVESA has multiple linked views that support overview, sort+filter, comparison, details-on-demand, and metadata relation-seeking tasks, as well as data simplification through feature analysis, interactive clustering, filtering, and motif detection and simplification. We evaluated IVESA with three case studies and a user study with six domain experts working with six different datasets and applications. 
Results demonstrate the usability and generalizability of IVESA across applications and cases that had up to 1,000,000 events.","authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"J\u00fcrgen Bernard"},{"affiliations":"","email":"","is_corresponding":false,"name":"Clara-Maria Barth"},{"affiliations":"","email":"","is_corresponding":false,"name":"Eduard Cuba"},{"affiliations":"","email":"","is_corresponding":false,"name":"Andrea Meier"},{"affiliations":"","email":"","is_corresponding":false,"name":"Yasara Peiris"},{"affiliations":"","email":"","is_corresponding":false,"name":"Ben Shneiderman"}],"award":"","doi":"10.1109/TVCG.2024.3382760","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20243382760","image_caption":"","keywords":["Time-Stamped Event Sequences, Time-Oriented Data, Visual Analytics, Data-First Design Study, Iterative Design, Visual Interfaces, User Evaluation"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"Visual Analysis of Time-Stamped Event Sequences","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20243385118","abstract":"Genomics is at the core of precision medicine, and there are high expectations on genomics-enabled improvement of patient outcomes in the years to come. Around the world, initiatives to increase the use of DNA sequencing in clinical routine are being deployed, such as the use of broad panels in the standard care for oncology patients. Such a development comes at the cost of increased demands on throughput in genomic data analysis. In this paper, we use the task of copy number variant (CNV) analysis as a context for exploring visualization concepts for clinical genomics. CNV calls are generated algorithmically, but time-consuming manual intervention is needed to separate relevant findings from irrelevant ones in the resulting large call candidate lists. We present a visualization environment, named Copycat, to support this review task in a clinical scenario. Key components are a scatter-glyph plot replacing the traditional list visualization, and a glyph representation designed for at-a-glance relevance assessments. 
Moreover, we present results from a formative evaluation of the prototype by domain specialists, from which we elicit insights to guide both prototype improvements and visualization for clinical genomics in general.","authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Emilia St\u00e5hlbom"},{"affiliations":"","email":"","is_corresponding":false,"name":"Jesper Molin"},{"affiliations":"","email":"","is_corresponding":false,"name":"Claes Lundstr\u00f6m"},{"affiliations":"","email":"","is_corresponding":false,"name":"Anders Ynnerman"}],"award":"","doi":"10.1109/TVCG.2024.3385118","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20243385118","image_caption":"","keywords":["Visualization, genomics, copy number variants, clinical decision support, evaluation"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"Visualization for diagnostic review of copy number variants in complex DNA sequencing data","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20243390219","abstract":"This system paper documents the technical foundations for the extension of the Topology ToolKit (TTK) to distributed-memory parallelism with the Message Passing Interface (MPI). While several recent papers introduced topology-based approaches for distributed-memory environments, these reported experiments obtained with tailored, mono-algorithm implementations. In contrast, we describe in this paper a versatile approach (supporting both triangulated domains and regular grids) for the support of topological analysis pipelines, i.e., a sequence of topological algorithms interacting together, possibly on distinct numbers of processes. While developing this extension, we faced several algorithmic and software engineering challenges, which we document in this paper. We describe an MPI extension of TTK\u2019s data structure for triangulation representation and traversal, a central component to the global performance and generality of TTK\u2019s topological implementations. We also introduce an intermediate interface between TTK and MPI, both at the global pipeline level, and at the fine-grain algorithmic level. We provide a taxonomy for the distributed-memory topological algorithms supported by TTK, depending on their communication needs, and provide examples of hybrid MPI+thread parallelizations. Detailed performance analyses show that parallel efficiencies range from 20% to 80% (depending on the algorithms), and that the MPI-specific preconditioning introduced by our framework induces a negligible computation time overhead. We illustrate the new distributed-memory capabilities of TTK with an example of an advanced analysis pipeline, combining multiple algorithms, run on the largest publicly available dataset we have found (120 billion vertices) on a standard cluster with 64 nodes (for a total of 1536 cores).
Finally, we provide a roadmap for the completion of TTK\u2019s MPI extension, along with generic recommendations for each algorithm communication category.","authors":[{"affiliations":"","email":"","is_corresponding":false,"name":"E. Le Guillou"},{"affiliations":"","email":"","is_corresponding":false,"name":"M. Will"},{"affiliations":"","email":"","is_corresponding":false,"name":"P. Guillou"},{"affiliations":"","email":"","is_corresponding":false,"name":"J. Lukasczyk"},{"affiliations":"","email":"","is_corresponding":false,"name":"P. Fortin"},{"affiliations":"","email":"","is_corresponding":false,"name":"C. Garth"},{"affiliations":"","email":"","is_corresponding":false,"name":"J. Tierny"}],"award":"","doi":"10.1109/TVCG.2024.3390219","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20243390219","image_caption":"","keywords":["Topological data analysis, high-performance computing, distributed-memory algorithms."],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"TTK is Getting MPI-Ready","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20243368621","abstract":"The use of natural language interfaces (NLIs) to create charts is becoming increasingly popular due to the intuitiveness of natural language interactions. One key challenge in this approach is to accurately capture user intents and transform them to proper chart specifications. This obstructs the wide use of NLI in chart generation, as users' natural language inputs are generally abstract (i.e., ambiguous or under-specified), without a clear specification of visual encodings. Recently, pre-trained large language models (LLMs) have exhibited superior performance in understanding and generating natural language, demonstrating great potential for downstream tasks. Inspired by this major trend, we propose ChartGPT, a system that generates charts from abstract natural language inputs. However, LLMs struggle to address complex logic problems. To enable the model to accurately specify the complex parameters and perform operations in chart generation, we decompose the generation process into a step-by-step reasoning pipeline, so that the model only needs to reason about a single, specific sub-task during each run. Moreover, LLMs are pre-trained on general datasets, which might be biased for the task of chart generation. To provide adequate visualization knowledge, we create a dataset consisting of abstract utterances and charts and improve model performance through fine-tuning. We further design an interactive interface for ChartGPT that allows users to check and modify the intermediate outputs of each step.
The effectiveness of the proposed system is evaluated through quantitative evaluations and a user study.","authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Yuan Tian"},{"affiliations":"","email":"","is_corresponding":false,"name":"Weiwei Cui"},{"affiliations":"","email":"","is_corresponding":false,"name":"Dazhen Deng"},{"affiliations":"","email":"","is_corresponding":false,"name":"Xinjing Yi"},{"affiliations":"","email":"","is_corresponding":false,"name":"Yurun Yang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Haidong Zhang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Yingcai Wu"}],"award":"","doi":"10.1109/TVCG.2024.3368621","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20243368621","image_caption":"","keywords":["Natural language interfaces, large language models, data visualization"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"ChartGPT: Leveraging LLMs to Generate Charts from Abstract Natural Language","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20243383089","abstract":"The advances in AI-enabled techniques have accelerated the creation and automation of visualizations in the past decade. However, presenting visualizations in a descriptive and generative format remains a challenge. Moreover, current visualization embedding methods focus on standalone visualizations, neglecting the importance of contextual information for multi-view visualizations. To address this issue, we propose a new representation model, Chart2Vec, to learn a universal embedding of visualizations with context-aware information. Chart2Vec aims to support a wide range of downstream visualization tasks such as recommendation and storytelling. Our model considers both structural and semantic information of visualizations in declarative specifications. To enhance the context-aware capability, Chart2Vec employs multi-task learning on both supervised and unsupervised tasks concerning the cooccurrence of visualizations. We evaluate our method through an ablation study, a user study, and a quantitative comparison. 
The results verified the consistency of our embedding method with human cognition and showed its advantages over existing methods.","authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Qing Chen"},{"affiliations":"","email":"","is_corresponding":false,"name":"Ying Chen"},{"affiliations":"","email":"","is_corresponding":false,"name":"Ruishi Zou"},{"affiliations":"","email":"","is_corresponding":false,"name":"Wei Shuai"},{"affiliations":"","email":"","is_corresponding":false,"name":"Yi Guo"},{"affiliations":"","email":"","is_corresponding":false,"name":"Jiazhe Wang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Nan Cao"}],"award":"","doi":"10.1109/TVCG.2024.3383089","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20243383089","image_caption":"","keywords":["Representation Learning, Multi-view Visualization, Visual Storytelling, Visualization Embedding"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"Chart2Vec: A Universal Embedding of Context-Aware Visualizations","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20243392587","abstract":"The issue of traffic congestion poses a significant obstacle to the development of global cities. One promising solution to tackle this problem is intelligent traffic signal control (TSC). Recently, TSC strategies leveraging reinforcement learning (RL) have garnered attention among researchers. However, the evaluation of these models has primarily relied on fixed metrics like reward and queue length. This limited evaluation approach provides only a narrow view of the model\u2019s decision-making process, impeding its practical implementation. Moreover, effective TSC necessitates coordinated actions across multiple intersections. Existing visual analysis solutions fall short when applied in multi-agent settings. In this study, we delve into the challenge of interpretability in multi-agent reinforcement learning (MARL), particularly within the context of TSC. We propose MARLens, a visual analytics system tailored to understand MARL-based TSC. Our system serves as a versatile platform for both RL and TSC researchers. It empowers them to explore the model\u2019s features from various perspectives, revealing its decision-making processes and shedding light on interactions among different agents. To facilitate quick identification of critical states, we have devised multiple visualization views, complemented by a traffic simulation module that allows users to replay specific training scenarios. To validate the utility of our proposed system, we present three comprehensive case studies, incorporate insights from domain experts through interviews, and conduct a user study. 
These collective efforts underscore the feasibility and effectiveness of MARLens in enhancing our understanding of MARL-based TSC systems and pave the way for more informed and efficient traffic management strategies.","authors":[{"affiliations":"","email":"","is_corresponding":false,"name":"Yutian Zhang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Guohong Zheng"},{"affiliations":"","email":"","is_corresponding":false,"name":"Zhiyuan Liu"},{"affiliations":"","email":"","is_corresponding":false,"name":"Quan Li"},{"affiliations":"","email":"","is_corresponding":true,"name":"Haipeng Zeng"}],"award":"","doi":"10.1109/TVCG.2024.3392587","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20243392587","image_caption":"","keywords":["Traffic signal control, multi-agent, reinforcement learning, visual analytics"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"MARLens: Understanding Multi-agent Reinforcement Learning for Traffic Signal Control via Visual Analytics","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20243392476","abstract":"Areas of interest (AOIs) are well-established means of providing semantic information for visualizing, analyzing, and classifying gaze data. However, the usual manual annotation of AOIs is time-consuming and further impaired by ambiguities in label assignments. To address these issues, we present an interactive labeling approach that combines visualization, machine learning, and user-centered explainable annotation. Our system provides uncertainty-aware visualization to build trust in classification with an increasing number of annotated examples. It combines specifically designed EyeFlower glyphs, dimensionality reduction, and selection and exploration techniques in an integrated workflow. The approach is versatile and hardware-agnostic, supporting video stimuli from stationary and unconstrained mobile eye tracking alike.
We conducted an expert review to assess labeling strategies and trust building.","authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Maurice Koch"},{"affiliations":"","email":"","is_corresponding":false,"name":"Nan Cao"},{"affiliations":"","email":"","is_corresponding":false,"name":"Daniel Weiskopf"},{"affiliations":"","email":"","is_corresponding":false,"name":"Kuno Kurzhals"}],"award":"","doi":"10.1109/TVCG.2024.3392476","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20243392476","image_caption":"","keywords":["Visual analytics, eye tracking, uncertainty, active learning, trust building"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"Active Gaze Labeling: Visualization for Trust Building","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20233324851","abstract":"Dimensionality reduction (DR) algorithms are diverse and widely used for analyzing high-dimensional data. Various metrics and tools have been proposed to evaluate and interpret the DR results. However, most metrics and methods fail to be well generalized to measure any DR results from the perspective of original distribution fidelity or lack interactive exploration of DR results. There is still a need for more intuitive and quantitative analysis to interactively explore high-dimensional data and improve interpretability. We propose a metric and a generalized algorithm-agnostic approach based on the concept of capacity to evaluate and analyze the DR results. Based on our approach, we develop a visual analytics system, HiLow, for exploring high-dimensional data and projections. We also propose a mixed-initiative recommendation algorithm that assists users in interactively manipulating DR results. Users can compare the differences in data distribution after the interaction through HiLow. Furthermore, we propose a novel visualization design focusing on quantitative analysis of differences between high- and low-dimensional data distributions.
Finally, through a user study and case studies, we validate the effectiveness of our approach and system in enhancing the interpretability of projections and analyzing the distribution of high- and low-dimensional data.","authors":[{"affiliations":"","email":"","is_corresponding":false,"name":"Yang Zhang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Jisheng Liu"},{"affiliations":"","email":"","is_corresponding":false,"name":"Chufan Lai"},{"affiliations":"","email":"","is_corresponding":false,"name":"Yuan Zhou"},{"affiliations":"","email":"","is_corresponding":true,"name":"Siming Chen"}],"award":"","doi":"10.1109/TVCG.2023.3324851","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20233324851","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"Interpreting High-Dimensional Projections With Capacity","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20243394745","abstract":"The fund investment industry heavily relies on the expertise of fund managers, who bear the responsibility of managing portfolios on behalf of clients. With their investment knowledge and professional skills, fund managers gain a competitive advantage over the average investor in the market. Consequently, investors prefer entrusting their investments to fund managers rather than directly investing in funds. For these investors, the primary concern is selecting a suitable fund manager. While previous studies have employed quantitative or qualitative methods to analyze various aspects of fund managers, such as performance metrics, personal characteristics, and performance persistence, they often face challenges when dealing with a large candidate space. Moreover, distinguishing whether a fund manager's performance stems from skill or luck poses a challenge, making it difficult to align with investors' preferences in the selection process. To address these challenges, this study characterizes the requirements of investors in selecting suitable fund managers and proposes an interactive visual analytics system called FMLens. This system streamlines the fund manager selection process, allowing investors to efficiently assess and deconstruct fund managers' investment styles and abilities across multiple dimensions. Additionally, the system empowers investors to scrutinize and compare fund managers' performances. The effectiveness of the approach is demonstrated through two case studies and a qualitative user study.
Feedback from domain experts indicates that the system excels in analyzing fund managers from diverse perspectives, enhancing the efficiency of fund manager evaluation and selection.","authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Longfei Chen"},{"affiliations":"","email":"","is_corresponding":false,"name":"Chen Cheng"},{"affiliations":"","email":"","is_corresponding":false,"name":"He Wang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Xiyuan Wang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Yun Tian"},{"affiliations":"","email":"","is_corresponding":false,"name":"Xuanwu Yue"},{"affiliations":"","email":"","is_corresponding":false,"name":"Wong Kam-Kwai"},{"affiliations":"","email":"","is_corresponding":false,"name":"Haipeng Zhang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Suting Hong"},{"affiliations":"","email":"","is_corresponding":false,"name":"Quan Li"}],"award":"","doi":"10.1109/TVCG.2024.3394745","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20243394745","image_caption":"","keywords":["Financial Data, Fund Manager Selection, Visual Analytics"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"FMLens: Towards Better Scaffolding the Process of Fund Manager Selection in Fund Investments","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20233336588","abstract":"This article explores how the ability to recall information in data visualizations depends on the presentation technology. Participants viewed 10 Isotype visualizations on a 2D screen, in 3D, in Virtual Reality (VR) and in Mixed Reality (MR). To provide a fair comparison between the three 3D conditions, we used LIDAR to capture the details of the physical rooms, and used this information to create our textured 3D models. For all environments, we measured the number of visualizations recalled and their order (2D) or spatial location (3D, VR, MR). We also measured the number of syntactic and semantic features recalled. Results of our study show increased recall and greater richness of data understanding in the MR condition. Not only did participants recall more visualizations and ordinal/spatial positions in MR, but they also remembered more details about graph axes and data mappings, and more information about the shape of the data. 
We discuss how differences in the spatial and kinesthetic cues provided in these different environments could contribute to these results, and reasons why we did not observe comparable performance in the 3D and VR conditions.","authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Christophe Hurter"},{"affiliations":"","email":"","is_corresponding":false,"name":"Bernice Rogowitz"},{"affiliations":"","email":"","is_corresponding":false,"name":"Guillaume Truong"},{"affiliations":"","email":"","is_corresponding":false,"name":"Tiffany Andry"},{"affiliations":"","email":"","is_corresponding":false,"name":"Hugo Romat"},{"affiliations":"","email":"","is_corresponding":false,"name":"Ludovic Gardy"},{"affiliations":"","email":"","is_corresponding":false,"name":"Fereshteh Amini"},{"affiliations":"","email":"","is_corresponding":false,"name":"Nathalie Henry Riche"}],"award":"","doi":"10.1109/TVCG.2023.3336588","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20233336588","image_caption":"","keywords":["Data visualization, Three-dimensional displays, Virtual reality, Mixed reality, Electronic mail, Syntactics, Semantics"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"Memory Recall for Data Visualizations in Mixed Reality, Virtual Reality, 3D, and 2D","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20243372104","abstract":"With the rise of short-form video platforms and the increasing availability of data, we see the potential for people to share short-form videos embedded with data in situ (e.g., daily steps when running) to increase the credibility and expressiveness of their stories. However, creating and sharing such videos in situ is challenging since it involves multiple steps and skills (e.g., data visualization creation and video editing), especially for amateurs. By conducting a formative study (N=10) using three design probes, we collected the motivations and design requirements. We then built VisTellAR, a mobile AR authoring tool, to help amateur video creators embed data visualizations in short-form videos in situ. A two-day user study shows that participants (N=12) successfully created various videos with data visualizations in situ, and they confirmed its ease of use and learnability.
AR pre-stage authoring was useful for helping people set up data visualizations in reality, with more designs in camera movements and interaction with gestures and physical objects for storytelling.","authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Wai Tong"},{"affiliations":"","email":"","is_corresponding":false,"name":"Kento Shigyo"},{"affiliations":"","email":"","is_corresponding":false,"name":"Lin-Ping Yuan"},{"affiliations":"","email":"","is_corresponding":false,"name":"Mingming Fan"},{"affiliations":"","email":"","is_corresponding":false,"name":"Ting-Chuen Pong"},{"affiliations":"","email":"","is_corresponding":false,"name":"Huamin Qu"},{"affiliations":"","email":"","is_corresponding":false,"name":"Meng Xia"}],"award":"","doi":"10.1109/TVCG.2024.3372104","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20243372104","image_caption":"","keywords":["Personal data, augmented reality, data visualization, storytelling, short-form video"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"VisTellAR: Embedding Data Visualization to Short-form Videos Using Mobile Augmented Reality","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20243406387","abstract":"The process of labeling medical text plays a crucial role in medical research. Nonetheless, creating accurately labeled medical texts of high quality is often a time-consuming task that requires specialized domain knowledge. Traditional methods for generating labeled data typically rely on rigid rule-based approaches, which may not adapt well to new tasks. While recent machine learning (ML) methodologies have mitigated the manual labeling efforts, configuring models to align with specific research requirements can be challenging for labelers without technical expertise. Moreover, automated labeling techniques, such as transfer learning, face difficulties in directly incorporating expert input, whereas semi-automated methods, like data programming, allow knowledge integration through rules or knowledge bases but may lack continuous result refinement throughout the entire labeling process. In this study, we present a collaborative human-ML teaming workflow that seamlessly integrates visual cluster analysis and active learning to assist domain experts in labeling medical text with high efficiency. Additionally, we introduce an innovative neural network model called the embedding network, which incorporates expert insights to generate task-specific embeddings for medical texts. We integrate the workflow and embedding network into a visual analytics tool named KMTLabeler, equipped with coordinated multi-level views and interactions.
Two illustrative case studies, along with a controlled user study, provide substantial evidence of the effectiveness of KMTLabeler in creating an efficient labeling environment for medical text classification.","authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"He Wang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Yang Ouyang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Yuchen Wu"},{"affiliations":"","email":"","is_corresponding":false,"name":"Chang Jiang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Lixia Jin"},{"affiliations":"","email":"","is_corresponding":false,"name":"Yuanwu Cao"},{"affiliations":"","email":"","is_corresponding":false,"name":"Quan Li"}],"award":"","doi":"10.1109/TVCG.2024.3406387","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20243406387","image_caption":"","keywords":["Medical Text Labeling, Expert Knowledge, Embedding Network, Visual Cluster Analysis, Active Learning"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"KMTLabeler: An Interactive Knowledge-Assisted Labeling Tool for Medical Text Classification","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20243411786","abstract":"We present a novel method for the interactive construction and rendering of extremely large molecular scenes, capable of representing multiple biological cells in atomistic detail. Our method is tailored for scenes that are procedurally constructed based on a given set of building rules. Rendering of large scenes normally requires the entire scene available in-core, or alternatively, it requires out-of-core management to load data into the memory hierarchy as a part of the rendering loop. Instead of out-of-core memory management, we propose to procedurally generate the scene on-demand on the fly. The key idea is a positional- and view-dependent procedural scene-construction strategy, where only a fraction of the atomistic scene around the camera is available in the GPU memory at any given time. The atomistic detail is populated into a uniform-space partitioning using a grid that covers the entire scene. Most of the grid cells are not filled with geometry; only those potentially seen by the camera are populated. The atomistic detail is populated in a compute shader and its representation is connected with acceleration data structures for hardware ray-tracing of modern GPUs. Objects which are far away, where atomistic detail is not perceivable from a given viewpoint, are represented by a triangle mesh mapped with a seamless texture, generated from the rendering of geometry from atomistic detail. The algorithm consists of two pipelines, the construction-compute pipeline and the rendering pipeline, which work together to render molecular scenes at an atomistic resolution far beyond the limit of the GPU memory containing trillions of atoms.
We demonstrate our technique on multiple models of SARS-CoV-2 and the red blood cell.","authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Ruwayda Alharbi"},{"affiliations":"","email":"","is_corresponding":false,"name":"Ond\u0159ej Strnad"},{"affiliations":"","email":"","is_corresponding":false,"name":"Tobias Klein"},{"affiliations":"","email":"","is_corresponding":false,"name":"Ivan Viola"}],"award":"","doi":"10.1109/TVCG.2024.3411786","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20243411786","image_caption":"","keywords":["Interactive rendering, view-guided scene construction, biological data, hardware ray tracing"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"Nanomatrix: Scalable Construction of Crowded Biological Environments","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20243408255","abstract":"Generative text-to-image models, which allow users to create appealing images through a text prompt, have seen a dramatic increase in popularity in recent years. However, most users have a limited understanding of how such models work and often rely on trial and error strategies to achieve satisfactory results. The prompt history contains a wealth of information that could provide users with insights into what has been explored and how the prompt changes impact the output image, yet little research attention has been paid to the visual analysis of such a process to support users. We propose the Image Variant Graph, a novel visual representation designed to support comparing prompt-image pairs and exploring the editing history. The Image Variant Graph models prompt differences as edges between corresponding images and presents the distances between images through projection. Based on the graph, we developed the PrompTHis system through co-design with artists. Based on the review and analysis of the prompting history, users can better understand the impact of prompt changes and have more effective control of image generation.
A quantitative user study and qualitative interviews demonstrate that PrompTHis can help users review the prompt history, make sense of the model, and plan their creative process.","authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Yuhan Guo"},{"affiliations":"","email":"","is_corresponding":false,"name":"Hanning Shao"},{"affiliations":"","email":"","is_corresponding":false,"name":"Can Liu"},{"affiliations":"","email":"","is_corresponding":false,"name":"Kai Xu"},{"affiliations":"","email":"","is_corresponding":false,"name":"Xiaoru Yuan"}],"award":"","doi":"10.1109/TVCG.2024.3408255","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20243408255","image_caption":"","keywords":["Text visualization, image visualization, text-to-image generation, editing history, provenance, generative art"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"PrompTHis: Visualizing the Process and Influence of Prompt Editing during Text-to-Image Creation","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20223193756","abstract":"Information visualization uses various types of representations to encode data into graphical formats. Prior work on visualization techniques has evaluated the accuracy of perceived numerical data values from visual data encodings such as graphical position, length, orientation, size, and color. Our work aims to extend the research of graphical perception to the use of motion as data encodings for quantitative values. We present two experiments implementing multiple fundamental aspects of motion such as type, speed, and synchronicity that can be used for numerical value encoding as well as comparing motion to static visual encodings in terms of user perception and accuracy. We studied how well users can assess the differences between several types of motion and static visual encodings and present an updated ranking of accuracy for quantitative judgments. Our results indicate that non-synchronized motion can be interpreted more quickly and more accurately than synchronized motion. Moreover, our ranking of static and motion visual representations shows that motion, especially expansion and translational types, has great potential as a data encoding technique for quantitative value. Finally, we discuss the implications for the use of animation and motion for numerical representations in data visualization.","authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Shaghayegh Esmaeili"},{"affiliations":"","email":"","is_corresponding":false,"name":"Samia Kabir"},{"affiliations":"","email":"","is_corresponding":false,"name":"Anthony M. Colas"},{"affiliations":"","email":"","is_corresponding":false,"name":"Rhema P. Linder"},{"affiliations":"","email":"","is_corresponding":false,"name":"Eric D. 
Ragan"}],"award":"","doi":"10.1109/TVCG.2022.3193756","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20223193756","image_caption":"","keywords":["Information visualization, animation and motion-related techniques, empirical study, graphical perception, evaluation."],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"Evaluating Graphical Perception of Visual Motion for Quantitative Data Encoding","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20243402610","abstract":"Point clouds are widely used as a versatile representation of 3D entities and scenes for all scale domains and in a variety of application areas, serving as a fundamental data category to directly convey spatial features. However, due to point sparsity, lack of structure, irregular distribution, and acquisition-related inaccuracies, results of point cloud visualization are often subject to visual complexity and ambiguity. In this regard, non-photorealistic rendering can improve visual communication by reducing the cognitive effort required to understand an image or scene and by directing attention to important features. In the last 20 years, this has been demonstrated by various non-photorealistic rendering approaches that were proposed to target point clouds specifically. However, they do not use a common language or structure for assessment, which complicates comparison and selection. Further, recent developments regarding point cloud characteristics and processing, such as massive data size or web-based rendering, are rarely considered. To address these issues, we present a survey on non-photorealistic rendering approaches for point cloud visualization, providing an overview of the current state of research. We derive a structure for the assessment of approaches, proposing seven primary dimensions for the categorization regarding intended goals, data requirements, used techniques, and mode of operation.
We then systematically assess corresponding approaches and utilize this classification to identify trends and research gaps, motivating future research in the development of effective non-photorealistic point cloud rendering methods.","authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Ole Wegen"},{"affiliations":"","email":"","is_corresponding":false,"name":"Willy Scheibel"},{"affiliations":"","email":"","is_corresponding":false,"name":"Matthias Trapp"},{"affiliations":"","email":"","is_corresponding":false,"name":"Rico Richter"},{"affiliations":"","email":"","is_corresponding":false,"name":"J\u00fcrgen D\u00f6llner"}],"award":"","doi":"10.1109/TVCG.2024.3402610","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20243402610","image_caption":"","keywords":["Point clouds, survey, non-photorealistic rendering"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"A Survey on Non-photorealistic Rendering Approaches for Point Cloud Visualization","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20243413195","abstract":"With the growing complexity and volume of data, visualizations have become more intricate, often requiring advanced techniques to convey insights. These complex charts are prevalent in everyday life, and individuals who lack knowledge in data visualization may find them challenging to understand. This paper investigates using Large Language Models (LLMs) to help users with low data literacy understand complex visualizations. While previous studies focus on text interactions with users, we noticed that visual cues are also critical for interpreting charts. We introduce an LLM application that supports both text and visual interaction for guiding chart interpretation. Our study with 26 participants revealed that the in-situ support effectively assisted users in interpreting charts and enhanced learning by addressing specific chart-related questions and encouraging further exploration. Visual communication allowed participants to convey their interests straightforwardly, eliminating the need for textual descriptions. However, the LLM assistance led users to engage less with the system, resulting in fewer insights from the visualizations. This suggests that users, particularly those with lower data literacy and motivation, may have over-relied on the LLM agent. 
We discuss opportunities for deploying LLMs to enhance visualization literacy while emphasizing the need for a balanced approach.","authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Kiroong Choe"},{"affiliations":"","email":"","is_corresponding":false,"name":"Chaerin Lee"},{"affiliations":"","email":"","is_corresponding":false,"name":"Soohyun Lee"},{"affiliations":"","email":"","is_corresponding":false,"name":"Jiwon Song"},{"affiliations":"","email":"","is_corresponding":false,"name":"Aeri Cho"},{"affiliations":"","email":"","is_corresponding":false,"name":"Nam Wook Kim"},{"affiliations":"","email":"","is_corresponding":false,"name":"Jinwook Seo"}],"award":"","doi":"10.1109/TVCG.2024.3413195","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20243413195","image_caption":"","keywords":["Visualization literacy, Large language model, Visual communication"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"Enhancing Data Literacy On-demand: LLMs as Guides for Novices in Chart Interpretation","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20243397004","abstract":"Data charts are prevalent across various fields due to their efficacy in conveying complex data relationships. However, static charts may sometimes struggle to engage readers and efficiently present intricate information, potentially resulting in limited understanding. We introduce \u201cLive Charts,\u201d a new format of presentation that decomposes complex information within a chart and explains the information pieces sequentially through rich animations and accompanying audio narration. We propose an automated approach to revive static charts into Live Charts. Our method integrates GNN-based techniques to analyze the chart components and extract data from charts. Then we adopt large natural language models to generate appropriate animated visuals along with a voice-over to produce Live Charts from static ones. We conducted a thorough evaluation of our approach, which involved the model performance, use cases, a crowd-sourced user study, and expert interviews. The results demonstrate Live Charts offer a multi-sensory experience where readers can follow the information and understand the data insights better. 
We analyze the benefits and drawbacks of Live Charts over static charts as a new information consumption experience.","authors":[{"affiliations":"","email":"","is_corresponding":false,"name":"Velitchko Filipov"},{"affiliations":"","email":"","is_corresponding":false,"name":"Alessio Arleo"},{"affiliations":"","email":"","is_corresponding":false,"name":"Markus B\u00f6gl"},{"affiliations":"","email":"","is_corresponding":false,"name":"Silvia Miksch"}],"award":"","doi":"10.1109/TVCG.2024.3397004","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20243397004","image_caption":"","keywords":["Charts, storytelling, machine learning, automatic visualization"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"Reviving Static Charts into Live Charts","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20243376406","abstract":"Visualizing event timelines for collaborative text writing is an important application for navigating and understanding such data, as time passes and the size and complexity of both text and timeline increase. They are often employed by applications such as code repositories and collaborative text editors. In this paper, we present a visualization tool to explore historical records of writing of legislative texts, which were discussed and voted on by an assembly of representatives. Our visualization focuses on event timelines from text documents that involve multiple people and different topics, allowing for observation of different proposed versions of said text or tracking data provenance of given text sections, while highlighting the connections between all elements involved. We also describe the process of designing such a tool alongside domain experts, with three steps of evaluation being conducted to verify the effectiveness of our design.","authors":[{"affiliations":"","email":"","is_corresponding":false,"name":"Gabriel D. 
Cantareira"},{"affiliations":"","email":"","is_corresponding":false,"name":"Yiwen Xing"},{"affiliations":"","email":"","is_corresponding":false,"name":"Nicholas Cole"},{"affiliations":"","email":"","is_corresponding":false,"name":"Rita Borgo"},{"affiliations":"","email":"","is_corresponding":true,"name":"Alfie Abdul-Rahman"}],"award":"","doi":"10.1109/TVCG.2024.3376406","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20243376406","image_caption":"","keywords":["Data visualization, Collaboration, History, Humanities, Writing, Navigation, Metadata"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"Interactive Hierarchical Timeline for Collaborative Text Negotiation in Historical Records","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20243411575","abstract":"Creating an animated data video with audio narration is a time-consuming and complex task that requires expertise. It involves designing complex animations, turning written scripts into audio narrations, and synchronizing visual changes with the narrations. This paper presents WonderFlow, an interactive authoring tool that facilitates narration-centric design of animated data videos. WonderFlow allows authors to easily specify semantic links between text and the corresponding chart elements. Then it automatically generates audio narration by leveraging text-to-speech techniques and aligns the narration with an animation. WonderFlow provides a structure-aware animation library designed to ease chart animation creation, enabling authors to apply pre-designed animation effects to common visualization components. Additionally, authors can preview and refine their data videos within the same system, without having to switch between different creation tools.
A series of evaluation results confirmed that WonderFlow is easy to use and simplifies the creation of data videos with narration-animation interplay.","authors":[{"affiliations":"","email":"","is_corresponding":false,"name":"Yun Wang"},{"affiliations":"","email":"","is_corresponding":true,"name":"Leixian Shen"},{"affiliations":"","email":"","is_corresponding":false,"name":"Zhengxin You"},{"affiliations":"","email":"","is_corresponding":false,"name":"Xinhuan Shu"},{"affiliations":"","email":"","is_corresponding":false,"name":"Bongshin Lee"},{"affiliations":"","email":"","is_corresponding":false,"name":"John Thompson"},{"affiliations":"","email":"","is_corresponding":false,"name":"Haidong Zhang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Dongmei Zhang"}],"award":"","doi":"10.1109/TVCG.2024.3411575","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20243411575","image_caption":"","keywords":["Data video, Data visualization, Narration-animation interplay, Storytelling, Authoring tool"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"WonderFlow: Narration-Centric Design of Animated Data Videos","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20233333356","abstract":"As urban populations grow, effectively accessing urban performance measures such as livability and comfort becomes increasingly important due to their significant socioeconomic impacts. While Point of Interest (POI) data has been utilized for various applications in location-based services, its potential for urban performance analytics remains unexplored. In this paper, we present SenseMap, a novel approach for analyzing urban performance by leveraging POI data as a semantic representation of urban functions. We quantify the contribution of POIs to different urban performance measures by calculating semantic textual similarities on our constructed corpus. We propose Semantic-adaptive Kernel Density Estimation which takes into account POIs\u2019 influential areas across different Traffic Analysis Zones and semantic contributions to generate semantic density maps for measures. We design and implement a feature-rich, real-time visual analytics system for users to explore the urban performance of their surroundings. Evaluations with human judgment and reference data demonstrate the feasibility and validity of our method. 
Usage scenarios and user studies demonstrate the capability, usability, and explainability of our system.","authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Juntong Chen"},{"affiliations":"","email":"","is_corresponding":false,"name":"Qiaoyun Huang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Changbo Wang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Chenhui Li"}],"award":"","doi":"10.1109/TVCG.2023.3333356","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20233333356","image_caption":"","keywords":["Urban data, semantic textual similarity, point of interest, density map, visual analytics, visualization design"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"SenseMap: Urban Performance Visualization and Analytics via Semantic Textual Similarity","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20243402834","abstract":"Impact dynamics are crucial for estimating the growth patterns of NFT projects by tracking the diffusion and decay of their relative appeal among stakeholders. Machine learning methods for impact dynamics analysis are incomprehensible and rigid in terms of their interpretability and transparency, whilst stakeholders require interactive tools for informed decision-making. Nevertheless, developing such a tool is challenging due to the substantial, heterogeneous NFT transaction data and the requirements for flexible, customized interactions. To this end, we integrate intuitive visualizations to unveil the impact dynamics of NFT projects. We first conduct a formative study and summarize analysis criteria, including substitution mechanisms, impact attributes, and design requirements from stakeholders. Next, we propose the Minimal Substitution Model to simulate substitutive systems of NFT projects that can be feasibly represented as node-link graphs. Particularly, we utilize attribute-aware techniques to embed the project status and stakeholder behaviors in the layout design. Accordingly, we develop a multi-view visual analytics system, namely NFTracer, allowing interactive analysis of impact dynamics in NFT transactions. We demonstrate the informativeness, effectiveness, and usability of NFTracer by performing two case studies with domain experts and one user study with stakeholders. The studies suggest that NFT projects featuring a higher degree of similarity are more likely to substitute each other. 
The impact of NFT projects within substitutive systems is contingent upon the degree of stakeholders\u2019 influx and projects\u2019 freshness.","authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Yifan Cao"},{"affiliations":"","email":"","is_corresponding":false,"name":"Qing Shi"},{"affiliations":"","email":"","is_corresponding":false,"name":"Lucas Shen"},{"affiliations":"","email":"","is_corresponding":false,"name":"Kani Chen"},{"affiliations":"","email":"","is_corresponding":false,"name":"Yang Wang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Wei Zeng"},{"affiliations":"","email":"","is_corresponding":false,"name":"Huamin Qu"}],"award":"","doi":"10.1109/TVCG.2024.3402834","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20243402834","image_caption":"","keywords":["Stakeholders, Nonfungible Tokens, Social Networking Online, Visual Analytics, Network Analyzers, Measurement, Layout, Impact Dynamics Analysis, Non Fungible Tokens NFTs, NFT Transaction Data, Substitutive Systems, Visual Analytics"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"Tracing NFT Impact Dynamics in Transaction-flow Substitutive Systems with Visual Analytics","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20243368060","abstract":"Visual analytics supports data analysis tasks within complex domain problems. However, due to the richness of data types, visual designs, and interaction designs, users need to recall and process a significant amount of information when they visually analyze data. These challenges emphasize the need for more intelligent visual analytics methods. Large language models have demonstrated the ability to interpret various forms of textual data, offering the potential to facilitate intelligent support for visual analytics. We propose LEVA, a framework that uses large language models to enhance users' VA workflows at multiple stages: onboarding, exploration, and summarization. To support onboarding, we use large language models to interpret visualization designs and view relationships based on system specifications. For exploration, we use large language models to recommend insights based on the analysis of system status and data to facilitate mixed-initiative exploration. For summarization, we present a selective reporting strategy to retrace analysis history through a stream visualization and generate insight reports with the help of large language models. We demonstrate how LEVA can be integrated into existing visual analytics systems. 
Two usage scenarios and a user study suggest that LEVA effectively aids users in conducting visual analytics.","authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Yuheng Zhao"},{"affiliations":"","email":"","is_corresponding":false,"name":"Yixing Zhang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Yu Zhang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Xinyi Zhao"},{"affiliations":"","email":"","is_corresponding":false,"name":"Junjie Wang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Zekai Shao"},{"affiliations":"","email":"","is_corresponding":false,"name":"Cagatay Turkay"},{"affiliations":"","email":"","is_corresponding":false,"name":"Siming Chen"}],"award":"","doi":"10.1109/TVCG.2024.3368060","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20243368060","image_caption":"","keywords":["Insight recommendation, mixed-initiative, interface agent, large language models, visual analytics"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"LEVA: Using Large Language Models to Enhance Visual Analytics","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20223229017","abstract":"We present V-Mail, a framework of cross-platform applications, interactive techniques, and communication protocols for improved multi-person correspondence about spatial 3D datasets. Inspired by the daily use of e-mail, V-Mail seeks to enable a similar style of rapid, multi-person communication accessible on any device; however, it aims to do this in the new context of spatial 3D communication, where limited access to 3D graphics hardware typically prevents such communication. The approach integrates visual data storytelling with data exploration, spatial annotations, and animated transitions. V-Mail \u201cdata stories\u201d are exported in a standard video file format to establish a common baseline level of access on (almost) any device. The V-Mail framework also includes a series of complementary client applications and plugins that enable different degrees of story co-authoring and data exploration, adjusted automatically to match the capabilities of various devices. A lightweight, phone-based V-Mail app makes it possible to annotate data by adding captions to the video. These spatial annotations are then immediately accessible to team members running high-end 3D graphics visualization systems that also include a V-Mail client, implemented as a plugin. 
Results and evaluation from applying V-Mail to assist communication within an interdisciplinary science team studying Antarctic ice sheets confirm the utility of the asynchronous, cross-platform collaborative framework while also highlighting some current limitations and opportunities for future work.","authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Jung Who Nam"},{"affiliations":"","email":"","is_corresponding":false,"name":"Tobias Isenberg"},{"affiliations":"","email":"","is_corresponding":false,"name":"Daniel F. Keefe"}],"award":"","doi":"10.1109/TVCG.2022.3229017","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20223229017","image_caption":"","keywords":["Human-computer interaction, visualization of scientific 3D data, communication, storytelling, immersive analytics"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"V-Mail: 3D-Enabled Correspondence about Spatial Data on (Almost) All Your Devices","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-tvcg-20233261320","abstract":"In recent years, narrative visualization has gained much attention. Researchers have proposed different design spaces for various narrative visualization genres and scenarios to facilitate the creation process. As users' needs grow and automation technologies advance, increasingly more tools have been designed and developed. In this study, we summarized six genres of narrative visualization (annotated charts, infographics, timelines & storylines, data comics, scrollytelling & slideshow, and data videos) based on previous research and four types of tools (design spaces, authoring tools, ML/AI-supported tools and ML/AI-generator tools) based on the intelligence and automation level of the tools. We surveyed 105 papers and tools to study how automation can progressively engage in visualization design and narrative processes to help users easily create narrative visualizations. This research aims to provide an overview of current research and development in the automation involvement of narrative visualization tools. 
We discuss key research problems in each category and suggest new opportunities to encourage further research in the related domain.","authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Qing Chen"},{"affiliations":"","email":"","is_corresponding":false,"name":"Shixiong Cao"},{"affiliations":"","email":"","is_corresponding":false,"name":"Jiazhe Wang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Nan Cao"}],"award":"","doi":"10.1109/TVCG.2023.3261320","event_id":"v-tvcg","event_title":"TVCG Invited Presentations","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"v-tvcg-20233261320","image_caption":"","keywords":["Data Visualization, Automatic Visualization, Narrative Visualization, Design Space, Authoring Tools, Survey"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"tvcg0","session_room":"None","session_title":"TVCG","session_uid":"v-tvcg","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TVCG"],"time_stamp":"","title":"How Does Automation Shape the Process of Narrative Visualization: A Survey of Tools","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-cga-9745375","abstract":"We consider the general problem known as job shop scheduling, in which multiple jobs consist of sequential operations that need to be executed or served by appropriate machines having limited capacities. For example, train journeys (jobs) consist of moves and stops (operations) to be served by rail tracks and stations (machines). A schedule is an assignment of the job operations to machines and times where and when they will be executed. The developers of computational methods for job scheduling need tools enabling them to explore how their methods work. At a high level of generality, we define the system of pertinent exploration tasks and a combination of visualizations capable of supporting the tasks. We provide general descriptions of the purposes, contents, visual encoding, properties, and interactive facilities of the visualizations and illustrate them with images from an example implementation in air traffic management. We justify the design of the visualizations based on the tasks, principles of creating visualizations for pattern discovery, and scalability requirements. The outcomes of our research are sufficiently general to be of use in a variety of applications.","authors":[{"affiliations":"","email":"gennady.andrienko@iais.fraunhofer.de","is_corresponding":true,"name":"Gennady Andrienko"},{"affiliations":"","email":"natalia.andrienko@iais.fraunhofer.de","is_corresponding":false,"name":"Natalia Andrienko"},{"affiliations":"","email":"jmcordero@e-crida.enaire.es","is_corresponding":false,"name":"Jose Manuel Cordero Garcia"},{"affiliations":"","email":"dirk.hecker@iais.fraunhofer.de","is_corresponding":false,"name":"Dirk Hecker"},{"affiliations":"","email":"georgev@unipi.gr","is_corresponding":false,"name":"George A. 
Vouros"}],"award":"","doi":"10.1109/MCG.2022.3163437","event_id":"v-cga","event_title":"CG&A Invited Partnership Presentations","external_paper_link":"","fno":"9745375","has_fno":true,"has_image":false,"has_pdf":false,"id":"v-cga-9745375","image_caption":"","keywords":["Visualization, Schedules, Task Analysis, Optimization, Job Shop Scheduling, Data Analysis, Processor Scheduling, Iterative Methods"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"cga0","session_room":"None","session_title":"CG&A","session_uid":"v-cga","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["CG&A"],"time_stamp":"","title":"Supporting Visual Exploration of Iterative Job Scheduling","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-cga-9612019","abstract":"The number of online news articles available nowadays is rapidly increasing. When exploring articles on online news portals, navigation is mostly limited to the most recent ones. The spatial context and the history of topics are not immediately accessible. To support readers in the exploration or research of articles in large datasets, we developed an interactive 3D globe visualization. We worked with datasets from multiple online news portals containing up to 45,000 articles. Using agglomerative hierarchical clustering, we represent the referenced locations of news articles on a globe with different levels of detail. We employ two interaction schemes for navigating the viewpoint on the visualization, including support for hand-held devices and desktop PCs, and provide search functionality and interactive filtering. 
Based on this framework, we explore additional modules for jointly exploring the spatial and temporal domain of the dataset and incorporating live news into the visualization.","authors":[{"affiliations":"","email":"nicholas.ingulfsen@gmail.com","is_corresponding":false,"name":"Nicholas Ingulfsen"},{"affiliations":"","email":"simone.schaub@visinf.tu-darmstadt.de","is_corresponding":false,"name":"Simone Schaub-Meyer"},{"affiliations":"","email":"grossm@inf.ethz.ch","is_corresponding":false,"name":"Markus Gross"},{"affiliations":"","email":"tobias.guenther@fau.de","is_corresponding":true,"name":"Tobias G\u00fcnther"}],"award":"","doi":"10.1109/MCG.2021.3127434","event_id":"v-cga","event_title":"CG&A Invited Partnership Presentations","external_paper_link":"","fno":"9612019","has_fno":true,"has_image":false,"has_pdf":false,"id":"v-cga-9612019","image_caption":"","keywords":["News Articles, Number Of Articles, Headlines, Interactive Visualization, Online News, Agglomerative Clustering, Local News, Interactive Exploration, Desktop PC, Different Levels Of Detail, News Portals, Spatial Information, User Study, 3D Space, Human-computer Interaction, Temporal Information, Third Dimension, Tablet Computer, Pie Chart, News Stories, 3D Visualization, Article Details, Visual Point, Bottom Of The Screen, Geospatial Data, Type Of Visualization, Largest Dataset, Tagging Location, Live Feed"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"cga0","session_room":"None","session_title":"CG&A","session_uid":"v-cga","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["CG&A"],"time_stamp":"","title":"News Globe: Visualization of Geolocalized News Articles","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-cga-9866547","abstract":"In many applications, developed deep-learning models need to be iteratively debugged and refined to improve the model efficiency over time. Debugging some models, such as temporal multilabel classification (TMLC) where each data point can simultaneously belong to multiple classes, can be especially more challenging due to the complexity of the analysis and instances that need to be reviewed. In this article, focusing on video activity recognition as an application of TMLC, we propose DETOXER, an interactive visual debugging system to support finding different error types and scopes through providing multiscope explanations.","authors":[{"affiliations":"","email":"m.nourani@northeastern.edu","is_corresponding":true,"name":"Mahsan Nourani"},{"affiliations":"","email":"chiradeep.roy@utdallas.edu","is_corresponding":false,"name":"Chiradeep Roy"},{"affiliations":"","email":"dhoneycutt@ufl.edu","is_corresponding":false,"name":"Donald R. Honeycutt"},{"affiliations":"","email":"eragan@ufl.edu","is_corresponding":false,"name":"Eric D. 
Ragan"},{"affiliations":"","email":"vibhav.gogate@utdallas.edu","is_corresponding":false,"name":"Vibhav Gogate"}],"award":"","doi":"10.1109/MCG.2022.3201465","event_id":"v-cga","event_title":"CG&A Invited Partnership Presentations","external_paper_link":"","fno":"9866547","has_fno":true,"has_image":false,"has_pdf":false,"id":"v-cga-9866547","image_caption":"","keywords":["Debugging, Analytical Models, Heating Systems, Data Models, Computational Modeling, Activity Recognition, Deep Learning, Multi Label Classification, Visualization Tool, Temporal Classification, Visual Debugging, False Positive, False Negative, Active Components, Deep Learning Models, Types Of Errors, Video Frames, Error Detection, Detection Of Types, Action Recognition, Interactive Visualization, Sequence Of Points, Design Goals, Positive Errors, Critical Outcomes, Error Patterns, Global Panel, False Negative Rate, False Positive Rate, Heatmap, Visual Approach, Truth Labels, True Positive, Confidence Score, Anomaly Detection, Interface Elements"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"cga0","session_room":"None","session_title":"CG&A","session_uid":"v-cga","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["CG&A"],"time_stamp":"","title":"DETOXER: A Visual Debugging Tool With Multiscope Explanations for Temporal Multilabel Classification","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-cga-10091124","abstract":"The Internet of Food (IoF) is an emerging field in smart foodsheds, involving the creation of a knowledge graph (KG) about the environment, agriculture, food, diet, and health. However, the heterogeneity and size of the KG present challenges for downstream tasks, such as information retrieval and interactive exploration. To address those challenges, we propose an interactive knowledge and learning environment (IKLE) that integrates three programming and modeling languages to support multiple downstream tasks in the analysis pipeline. To make IKLE easier to use, we have developed algorithms to automate the generation of each language. In addition, we collaborated with domain experts to design and develop a dataflow visualization system, which embeds the automatic language generations into components and allows users to build their analysis pipeline by dragging and connecting components of interest. We have demonstrated the effectiveness of IKLE through three real-world case studies in smart foodsheds.","authors":[{"affiliations":"","email":"tu.253@osu.edu","is_corresponding":true,"name":"Yamei Tu"},{"affiliations":"","email":"wang.5502@osu.edu","is_corresponding":false,"name":"Xiaoqi Wang"},{"affiliations":"","email":"qiu.580@osu.edu","is_corresponding":false,"name":"Rui Qiu"},{"affiliations":"","email":"hwshen@cse.ohio-state.edu","is_corresponding":false,"name":"Han-Wei Shen"},{"affiliations":"","email":"mmmille6@wisc.edu","is_corresponding":false,"name":"Michelle Miller"},{"affiliations":"","email":"jinmeng.rao@wisc.edu","is_corresponding":false,"name":"Jinmeng Rao"},{"affiliations":"","email":"song.gao@wisc.edu","is_corresponding":false,"name":"Song Gao"},{"affiliations":"","email":"prhuber@ucdavis.edu","is_corresponding":false,"name":"Patrick R. 
Huber"},{"affiliations":"","email":"adhollander@ucdavis.edu","is_corresponding":false,"name":"Allan D. Hollander"},{"affiliations":"","email":"matthew@ic-foods.org","is_corresponding":false,"name":"Matthew Lange"},{"affiliations":"","email":"cgarcia@tacc.utexas.edu","is_corresponding":false,"name":"Christian R. Garcia"},{"affiliations":"","email":"jstubbs@tacc.utexas.edu","is_corresponding":false,"name":"Joe Stubbs"}],"award":"","doi":"10.1109/MCG.2023.3263960","event_id":"v-cga","event_title":"CG&A Invited Partnership Presentations","external_paper_link":"","fno":"10091124","has_fno":true,"has_image":false,"has_pdf":false,"id":"v-cga-10091124","image_caption":"","keywords":["Learning Environment, Interactive Learning Environments, Programming Language, Visual System, Analysis Pipeline, Patterns In Data, Flow Data, Human-computer Interaction, Food Systems, Information Retrieval, Domain Experts, Language Model, Automatic Generation, Interactive Exploration, Cyberinfrastructure, Pre-trained Language Models, Resource Description Framework, SPARQL Query, DBpedia, Entity Types, Data Visualization, Resilience Analysis, Load Data, Query Results, Supply Chain, Network Flow"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"cga0","session_room":"None","session_title":"CG&A","session_uid":"v-cga","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["CG&A"],"time_stamp":"","title":"An Interactive Knowledge and Learning Environment in Smart Foodsheds","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-cga-10198358","abstract":"Set visualization facilitates the exploration and analysis of set-type data. However, how sets should be visualized when the data are uncertain is still an open research challenge. To address the problem of depicting uncertainty in set visualization, we ask 1) which aspects of set type data can be affected by uncertainty and 2) which characteristics of uncertainty influence the visualization design. We answer these research questions by first describing a conceptual framework that brings together 1) the information that is primarily relevant in sets (i.e., set membership, set attributes, and element attributes) and 2) different plausible categories of (un)certainty (i.e., certainty, undefined uncertainty as a binary fact, and defined uncertainty as quantifiable measure). Following the structure of our framework, we systematically discuss basic visualization examples of integrating uncertainty in set visualizations. 
We draw on existing knowledge about general uncertainty visualization and previous evidence of its effectiveness.","authors":[{"affiliations":"","email":"christian.tominski@uni-rostock.de","is_corresponding":false,"name":"Christian Tominski"},{"affiliations":"","email":"m.behrisch@uu.nl","is_corresponding":true,"name":"Michael Behrisch"},{"affiliations":"","email":"susanne.bleisch@fhnw.ch","is_corresponding":false,"name":"Susanne Bleisch"},{"affiliations":"","email":"sara.fabrikant@geo.uzh.ch","is_corresponding":false,"name":"Sara Irina Fabrikant"},{"affiliations":"","email":"eva.mayr@donau-uni.ac.at","is_corresponding":false,"name":"Eva Mayr"},{"affiliations":"","email":"miksch@ifs.tuwien.ac.at","is_corresponding":false,"name":"Silvia Miksch"},{"affiliations":"","email":"helen.purchase@monash.edu","is_corresponding":false,"name":"Helen Purchase"}],"award":"","doi":"10.1109/MCG.2023.3300441","event_id":"v-cga","event_title":"CG&A Invited Partnership Presentations","external_paper_link":"","fno":"10198358","has_fno":true,"has_image":false,"has_pdf":false,"id":"v-cga-10198358","image_caption":"","keywords":["Uncertainty, Data Visualization, Measurement Uncertainty, Visual Analytics, Terminology, Task Analysis, Surveys, Conceptual Framework, Cardinality, Data Visualization, Visual Representation, Measure Of The Amount, Set Membership, Intersection Set, Visual Design, Different Types Of Uncertainty, Missing Values, Visual Methods, Fuzzy Set, Age Of Students, Color Values, Uncertainty Values, Explicit Representation, Aggregate Value, Exact Information, Uncertain Information, Table Cells, Temporal Uncertainty, Uncertain Data, Representation Of Uncertainty, Implicit Representation, Spatial Uncertainty, Point Symbol, Visual Clutter, Color Hue, Graphical Elements, Uncertain Value"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"cga0","session_room":"None","session_title":"CG&A","session_uid":"v-cga","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["CG&A"],"time_stamp":"","title":"Visualizing Uncertainty in Sets","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-cga-10227838","abstract":"We report a study investigating the viability of using interactive visualizations to aid architectural design with building codes. While visualizations have been used to support general architectural design exploration, existing computational solutions treat building codes as separate from, rather than part of, the design process, creating challenges for architects. Through a series of participatory design studies with professional architects, we found that interactive visualizations have promising potential to aid design exploration and sensemaking in early stages of architectural design by providing feedback about potential allowances and consequences of design decisions. However, implementing a visualization system necessitates addressing the complexity and ambiguity inherent in building codes. 
To tackle these challenges, we propose various user-driven knowledge management mechanisms for integrating, negotiating, interpreting, and documenting building code rules.","authors":[{"affiliations":"","email":"snowak@sfu.ca","is_corresponding":true,"name":"Stan Nowak"},{"affiliations":"","email":"bon.aseniero@autodesk.com","is_corresponding":false,"name":"Bon Adriel Aseniero"},{"affiliations":"","email":"lyn@sfu.ca","is_corresponding":false,"name":"Lyn Bartram"},{"affiliations":"","email":"tovi@dgp.toronto.edu","is_corresponding":false,"name":"Tovi Grossman"},{"affiliations":"","email":"George.fitzmaurice@autodesk.com","is_corresponding":false,"name":"George Fitzmaurice"},{"affiliations":"","email":"justin.matejka@autodesk.com","is_corresponding":false,"name":"Justin Matejka"}],"award":"","doi":"10.1109/MCG.2023.3307971","event_id":"v-cga","event_title":"CG&A Invited Partnership Presentations","external_paper_link":"","fno":"10227838","has_fno":true,"has_image":false,"has_pdf":false,"id":"v-cga-10227838","image_caption":"","keywords":[],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"cga0","session_room":"None","session_title":"CG&A","session_uid":"v-cga","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["CG&A"],"time_stamp":"","title":"Identifying Visualization Opportunities to Help Architects Manage the Complexity of Building Codes","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-cga-10078374","abstract":"Existing dynamic weighted graph visualization approaches rely on users\u2019 mental comparison to perceive temporal evolution of dynamic weighted graphs, hindering users from effectively analyzing changes across multiple timeslices. We propose DiffSeer, a novel approach for dynamic weighted graph visualization by explicitly visualizing the differences of graph structures (e.g., edge weight differences) between adjacent timeslices. Specifically, we present a novel nested matrix design that overviews the graph structure differences over a time period as well as shows graph structure details in the timeslices of user interest. By collectively considering the overall temporal evolution and structure details in each timeslice, an optimization-based node reordering strategy is developed to group nodes with similar evolution patterns and highlight interesting graph structure details in each timeslice. We conducted two case studies on real-world graph datasets and in-depth interviews with 12 target users to evaluate DiffSeer. 
The results demonstrate its effectiveness in visualizing dynamic weighted graphs.","authors":[{"affiliations":"","email":"wenxiaolin@stu.scu.edu.cn","is_corresponding":false,"name":"Xiaolin Wen"},{"affiliations":"","email":"yongwang@smu.edu.sg","is_corresponding":true,"name":"Yong Wang"},{"affiliations":"","email":"wumeixuan@stu.scu.edu.cn","is_corresponding":false,"name":"Meixuan Wu"},{"affiliations":"","email":"wangfengjie@stu.scu.edu.cn","is_corresponding":false,"name":"Fengjie Wang"},{"affiliations":"","email":"xuanwu.yue@connect.ust.hk","is_corresponding":false,"name":"Xuanwu Yue"},{"affiliations":"","email":"shenqm@sustech.edu.cn","is_corresponding":false,"name":"Qiaomu Shen"},{"affiliations":"","email":"mayx@sustech.edu.cn","is_corresponding":false,"name":"Yuxin Ma"},{"affiliations":"","email":"zhumin@scu.edu.cn","is_corresponding":false,"name":"Min Zhu"}],"award":"","doi":"10.1109/MCG.2023.3248289","event_id":"v-cga","event_title":"CG&A Invited Partnership Presentations","external_paper_link":"","fno":"10078374","has_fno":true,"has_image":false,"has_pdf":false,"id":"v-cga-10078374","image_caption":"","keywords":["Visibility Graph, Spatial Patterns, Weight Change, In-depth Interviews, Temporal Changes, Temporal Evolution, Negative Changes, Interesting Patterns, Edge Weights, Real-world Datasets, Graph Structure, Visual Approach, Dynamic Visualization, Dynamic Graph, Financial Networks, Graph Datasets, Similar Evolutionary Patterns, User Interviews, Similar Changes, Chinese New Year, Sector Indices, Original Graph, Red Rectangle, Nodes In Order, Stock Market Crash, Stacked Bar Charts, Different Types Of Matrices, Chinese New, Blue Rectangle"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"cga0","session_room":"None","session_title":"CG&A","session_uid":"v-cga","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["CG&A"],"time_stamp":"","title":"DiffSeer: Difference-Based Dynamic Weighted Graph Visualization","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-cga-10128890","abstract":"Some 15 years ago, Visualization Viewpoints published an influential article titled Rainbow Color Map (Still) Considered Harmful (Borland and Taylor, 2007). The paper argued that the \u201crainbow colormap\u2019s characteristics of confusing the viewer, obscuring the data and actively misleading interpretation make it a poor choice for visualization.\u201d Subsequent articles often repeat and extend these arguments, so much so that avoiding rainbow colormaps, along with their derivatives, has become dogma in the visualization community. Despite this loud and persistent recommendation, scientists continue to use rainbow colormaps. Have we failed to communicate our message, or do rainbow colormaps offer advantages that have not been fully appreciated? We argue that rainbow colormaps have properties that are underappreciated by existing design conventions. We explore key critiques of the rainbow in the context of recent research to understand where and how rainbows might be misunderstood. 
Choosing a colormap is a complex task, and rainbow colormaps can be useful for selected applications.","authors":[{"affiliations":"","email":"cware@ccom.unh.edu","is_corresponding":false,"name":"Colin Ware"},{"affiliations":"","email":"mstone@acm.org","is_corresponding":true,"name":"Maureen Stone"},{"affiliations":"","email":"danielle.szafir@cs.unc.edu","is_corresponding":false,"name":"Danielle Albers Szafir"}],"award":"","doi":"10.1109/MCG.2023.3246111","event_id":"v-cga","event_title":"CG&A Invited Partnership Presentations","external_paper_link":"","fno":"10128890","has_fno":true,"has_image":false,"has_pdf":false,"id":"v-cga-10128890","image_caption":"","keywords":["Image Color Analysis, Semantics, Data Visualization, Estimation, Reliability Engineering"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"cga0","session_room":"None","session_title":"CG&A","session_uid":"v-cga","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["CG&A"],"time_stamp":"","title":"Rainbow Colormaps Are Not All Bad","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-cga-10207831","abstract":"The membership function is to categorize quantities along with a confidence degree. This article investigates a generic user interaction based on this function for categorizing various types of quantities without modification, which empowers users to articulate uncertainty categorization and enhance their visual data analysis significantly. We present the technique design and an online prototype, supplementing with insights from three case studies that highlight the technique\u2019s efficacy among different types of quantities. Furthermore, we conduct a formal user study to scrutinize the process and reasoning users employ while utilizing our technique. The findings indicate that our technique can help users create customized categories. 
Both our code and the interactive prototype are made available as open-source resources, intended for application across varied domains as a generic tool.","authors":[{"affiliations":"","email":"liuliqun.cs@gmail.com","is_corresponding":true,"name":"Liqun Liu"},{"affiliations":"","email":"romain.vuillemot@ec-lyon.fr","is_corresponding":false,"name":"Romain Vuillemot"}],"award":"","doi":"10.1109/MCG.2023.3301449","event_id":"v-cga","event_title":"CG&A Invited Partnership Presentations","external_paper_link":"","fno":"10207831","has_fno":true,"has_image":false,"has_pdf":false,"id":"v-cga-10207831","image_caption":"","keywords":["Data Visualization, Uncertainty, Prototypes, Fuzzy Logic, Image Color Analysis, Fuzzy Sets, Open Source Software, General Function, Membership Function, User Study, Classification Process, Fuzzy Logic, Quantitative Values, Visualization Techniques, Amount Of Type, Fuzzy Theory, General Interaction, Temperature Dataset, Interaction Techniques, Carbon Dioxide, Computation Time, Rule Based, Web Page, Real World Scenarios, Fuzzy Set, Domain Experts, Supercritical CO 2, Parallel Coordinates, Fuzzy System, Fuzzy Clustering, Interactive Visualization, Amount Of Items, Large Scale Problems"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"cga0","session_room":"None","session_title":"CG&A","session_uid":"v-cga","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["CG&A"],"time_stamp":"","title":"A Generic Interactive Membership Function for Categorization of Quantities","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-cga-10201383","abstract":"Although visualizations are a useful tool for helping people to understand information, they can also have unintended effects on human cognition. This is especially true for uncertain information, which is difficult for people to understand. Prior work has found that different methods of visualizing uncertain information can produce different patterns of decision making from users. However, uncertainty can also be represented via text or numerical information, and few studies have systematically compared these types of representations to visualizations of uncertainty. We present two experiments that compared visual representations of risk (icon arrays) to numerical representations (natural frequencies) in a wildfire evacuation task. Like prior studies, we found that different types of visual cues led to different patterns of decision making. In addition, our comparison of visual and numerical representations of risk found that people were more likely to evacuate when they saw visualizations than when they saw numerical representations. These experiments reinforce the idea that design choices are not neutral: seemingly minor differences in how information is represented can have important impacts on human risk perception and decision making.","authors":[{"affiliations":"","email":"lematze@sandia.gov","is_corresponding":true,"name":"Laura E. Matzen"},{"affiliations":"","email":"bchowel@sandia.gov","is_corresponding":false,"name":"Breannan C. Howell"},{"affiliations":"","email":"mctrumb@sandia.gov","is_corresponding":false,"name":"Michael C. S. 
Trumbo"},{"affiliations":"","email":"kmdivis@sandia.gov","is_corresponding":false,"name":"Kristin M. Divis"}],"award":"","doi":"10.1109/MCG.2023.3299875","event_id":"v-cga","event_title":"CG&A Invited Partnership Presentations","external_paper_link":"","fno":"10201383","has_fno":true,"has_image":false,"has_pdf":false,"id":"v-cga-10201383","image_caption":"","keywords":["Visualization, Uncertainty, Decision Making, Costs, Task Analysis, Laboratories, Information Analysis, Decision Making, Visual Representation, Numerical Representation, Decision Patterns, Deterministic, Risk Perception, Specific Information, Fundamental Frequency, Point Values, Representation Of Information, Risk Information, Visual Conditions, Numerous Conditions, Human Decision, Numerical Information, Impact Of Different Types, Uncertain Information, Type Of Visualization, Differences In Risk Perception, Representation Of Uncertainty, Increase In Participation, Participants In Experiment, Individual Difference Measures, Sandia National Laboratories, Risk Propensity, Bonus Payments, Average Response Time, Difference In Probability, Response Time"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"cga0","session_room":"None","session_title":"CG&A","session_uid":"v-cga","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["CG&A"],"time_stamp":"","title":"Numerical and Visual Representations of Uncertainty Lead to Different Patterns of Decision Making","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-cga-10414267","abstract":"Traditional approaches to data visualization have often focused on comparing different subsets of data, and this is reflected in the many techniques developed and evaluated over the years for visual comparison. Similarly, common workflows for exploratory visualization are built upon the idea of users interactively applying various filter and grouping mechanisms in search of new insights. This paradigm has proven effective at helping users identify correlations between variables that can inform thinking and decision-making. However, recent studies show that consumers of visualizations often draw causal conclusions even when not supported by the data. Motivated by these observations, this article highlights recent advances from a growing community of researchers exploring methods that aim to directly support visual causal inference. However, many of these approaches have their own limitations, which limit their use in many real-world scenarios. 
This article, therefore, also outlines a set of key open challenges and corresponding priorities for new research to advance the state of the art in visual causal inference.","authors":[{"affiliations":"","email":"borland@renci.org","is_corresponding":false,"name":"David Borland"},{"affiliations":"","email":"zeyuwang@cs.unc.edu","is_corresponding":false,"name":"Arran Zeyu Wang"},{"affiliations":"","email":"gotz@unc.edu","is_corresponding":false,"name":"David Gotz"}],"award":"","doi":"10.1109/MCG.2023.3338788","event_id":"v-cga","event_title":"CG&A Invited Partnership Presentations","external_paper_link":"","fno":"10414267","has_fno":true,"has_image":false,"has_pdf":false,"id":"v-cga-10414267","image_caption":"","keywords":["Analytical Models, Correlation, Visual Analytics, Decision Making, Data Visualization, Reliability Theory, Cognition, Inference Algorithms, Causal Inference, Causality, Social Media, Exploratory Analysis, Data Visualization, Visual Representation, Visual Analysis, Visualization Tool, Open Challenges, Interactive Visualization, Assembly Line, Different Subsets Of Data, Visual Analytics Tool, Data Driven Decision Making, Data Quality, Statistical Models, Causal Effect, Visual System, Use Of Social Media, Bar Charts, Causal Model, Causal Graph, Chart Types, Directed Acyclic Graph, Visual Design, Portion Of The Dataset, Causal Structure, Prior Section, Causal Explanations, Line Graph"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"cga0","session_room":"None","session_title":"CG&A","session_uid":"v-cga","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["CG&A"],"time_stamp":"","title":"Using Counterfactuals to Improve Causal Inferences From Visualizations","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"v-cga-10478355","abstract":"Recent developments in artificial intelligence (AI) and machine learning (ML) have led to the creation of powerful generative AI methods and tools capable of producing text, code, images, and other media in response to user prompts. Significant interest in the technology has led to speculation about what fields, including visualization, can be augmented or replaced by such approaches. However, there remains a lack of understanding about which visualization activities may be particularly suitable for the application of generative AI. Drawing on examples from the field, we map current and emerging capabilities of generative AI across the different phases of the visualization lifecycle and describe salient opportunities and challenges.","authors":[{"affiliations":"","email":"rahul.basole@accenture.com","is_corresponding":false,"name":"Rahul C. 
Basole"},{"affiliations":"","email":"timothy.major@accenture.com","is_corresponding":true,"name":"Timothy Major"}],"award":"","doi":"10.1109/MCG.2024.3362168","event_id":"v-cga","event_title":"CG&A Invited Partnership Presentations","external_paper_link":"","fno":"10478355","has_fno":true,"has_image":false,"has_pdf":false,"id":"v-cga-10478355","image_caption":"","keywords":["Generative AI, Art, Artificial Intelligence, Machine Learning, Visualization, Media, Augmented Reality, Machine Learning, Visual Representation, Professional Knowledge, Creative Process, Domain Experts, Generalization Capability, Development Of Artificial Intelligence, Artificial Intelligence Capabilities, Iterative Process, Natural Language, Commercial Software, Hallucinations, Team Sports, Design Requirements, Intelligence Agencies, Recommender Systems, User Requirements, Iterative Design, Use Of Artificial Intelligence, Visual Design, Phase Assemblage, Data Literacy"],"paper_type":"full","paper_type_color":"#1C3160","paper_type_name":"VIS Full Paper","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"cga0","session_room":"None","session_title":"CG&A","session_uid":"v-cga","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["CG&A"],"time_stamp":"","title":"Generative AI for Visualization: Opportunities and Challenges","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-topoinvis-1027","abstract":"Advances in high-performance computing require new ways to represent large-scale scientific data to support data storage, data transfers, and data analysis within scientific workflows. Multivariate functional approximation (MFA) has recently emerged as a new continuous meshless representation that approximates raw discrete data with a set of piecewise smooth functions. An MFA model of data thus offers a compact representation and supports high-order evaluation of values and derivatives anywhere in the domain. In this paper, we present CPE-MFA, the first critical point extraction framework designed for MFA models of large-scale, high-dimensional data. CPE-MFA extracts critical points directly from an MFA model without the need for discretization or resampling. 
This is the first step toward enabling continuous implicit models such as MFA to support topological data analysis at scale.","authors":[{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"guanqunma94@gmail.com","is_corresponding":true,"name":"Guanqun Ma"},{"affiliations":["Argonne National Laboratory, Lemont, United States"],"email":"dlenz@anl.gov","is_corresponding":false,"name":"David Lenz"},{"affiliations":["Argonne National Laboratory, Lemont, United States"],"email":"tpeterka@mcs.anl.gov","is_corresponding":false,"name":"Tom Peterka"},{"affiliations":["The Ohio State University, Columbus, United States"],"email":"guo.2154@osu.edu","is_corresponding":false,"name":"Hanqi Guo"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"wang.bei@gmail.com","is_corresponding":false,"name":"Bei Wang"}],"award":"","doi":"","event_id":"w-topoinvis","event_title":"TopoInVis: Workshop on Topological Data Analysis and Visualization","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-topoinvis-1027","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-topoinvis0","session_room":"None","session_title":"TopoInVis","session_uid":"w-topoinvis","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TopoInVis"],"time_stamp":"","title":"Critical Point Extraction from Multivariate Functional Approximation","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-topoinvis-1031","abstract":"3D symmetric tensor fields have a wide range of applications in science and engineering. The topology of such fields can provide critical insight into not only the structures in tensor fields but also their respective applications. Existing research focuses on the extraction of topological features such as degenerate curves and neutral surfaces. In this paper, we investigate the asymptotic behaviors of these topological features in the sphere of infinity. 
Our research leads to both theoretical analysis and observations that can aid further classifications of tensor field topology.","authors":[{"affiliations":["Oregon State University, Corvallis, United States"],"email":"linxinw@oregonstate.edu","is_corresponding":false,"name":"Xinwei Lin"},{"affiliations":["Oregon State University, Corvallis, United States"],"email":"zhangyue@oregonstate.edu","is_corresponding":false,"name":"Yue Zhang"},{"affiliations":["Oregon State University, Corvallis, United States"],"email":"zhange@eecs.oregonstate.edu","is_corresponding":true,"name":"Eugene Zhang"}],"award":"","doi":"","event_id":"w-topoinvis","event_title":"TopoInVis: Workshop on Topological Data Analysis and Visualization","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-topoinvis-1031","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-topoinvis0","session_room":"None","session_title":"TopoInVis","session_uid":"w-topoinvis","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TopoInVis"],"time_stamp":"","title":"Asymptotic Topology of 3D Linear Symmetric Tensor Fields","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-topoinvis-1033","abstract":"Jacobi sets are an important method to investigate the relationship between Morse functions. The Jacobi set for two Morse functions is the set of all points where the functions' gradients are linearly dependent. Both the segmentation of the domain by Jacobi sets and the Jacobi sets themselves have proven to be useful tools in multi-field visualization, data analysis in various applications, and for accelerating extraction algorithms. On a triangulated grid, they can be calculated by a piecewise linear interpolation. In practice, Jacobi sets can become very complex and large due to noise and numerical errors. Some techniques for simplifying Jacobi sets exist, but these only reduce individual elements such as noise or are purely theoretical. These techniques often only change the visual representation of the Jacobi sets, but not the underlying data. In this paper, we present an algorithm that simplifies the Jacobi sets for 2D bivariate scalar fields and at the same time modifies the underlying bivariate scalar fields while preserving the essential structures of the fields. We use a neighborhood graph to select the areas to be reduced and collapse these cells individually. We investigate the influence of different neighborhood graphs and present an adaptation for the visualization of Jacobi sets that take the collapsed cells into account. 
We apply our algorithm to a range of analytical and real-world data sets and compare it with established methods that also simplify the underlying bivariate scalar fields.","authors":[{"affiliations":["Leipzig University, Leipzig, Germany"],"email":"raith@informatik.uni-leipzig.de","is_corresponding":true,"name":"Felix Raith"},{"affiliations":["Leipzig University, Leipzig, Germany"],"email":"scheuermann@informatik.uni-leipzig.de","is_corresponding":false,"name":"Gerik Scheuermann"},{"affiliations":["Leipzig University, Leipzig, Germany"],"email":"heine@informatik.uni-leipzig.de","is_corresponding":false,"name":"Christian Heine"}],"award":"","doi":"","event_id":"w-topoinvis","event_title":"TopoInVis: Workshop on Topological Data Analysis and Visualization","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-topoinvis-1033","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-topoinvis0","session_room":"None","session_title":"TopoInVis","session_uid":"w-topoinvis","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TopoInVis"],"time_stamp":"","title":"Topological Simplification of Jacobi Sets for Piecewise-Linear Bivariate 2D Scalar Fields","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-topoinvis-1034","abstract":"The Morse-Smale complex is a standard tool in visual data analysis. The classic definition is based on a continuous view of the gradient of a scalar function, whose zeros are the critical points. These points are connected via gradient curves and surfaces emanating from saddle points, known as separatrices. In a discrete setting, the Morse-Smale complex is commonly extracted by constructing a combinatorial gradient assuming the steepest descent direction. Previous works have shown that this method results in a geometric embedding of the separatrices that can be fundamentally different from those in the continuous case. To achieve a similar embedding, different approaches for constructing a combinatorial gradient were proposed. In this paper, we show that these approaches generate a different topology, i.e., the connectivity between critical points changes. Additionally, we demonstrate that the steepest descent method can compute topologically and geometrically accurate Morse-Smale complexes when applied to certain types of grids. 
Based on these observations, we suggest a method to attain both geometric and topological accuracy for the Morse-Smale complex of data sampled on a uniform grid.","authors":[{"affiliations":["KTH Royal Institute of Technology, Stockholm, Sweden"],"email":"sonlt@kth.se","is_corresponding":true,"name":"Son Le Thanh"},{"affiliations":["KTH Royal Institute of Technology, Stockholm, Sweden"],"email":"ankele@iai.uni-bonn.de","is_corresponding":false,"name":"Michael Ankele"},{"affiliations":["KTH Royal Institute of Technology, Stockholm, Sweden"],"email":"weinkauf@kth.se","is_corresponding":false,"name":"Tino Weinkauf"}],"award":"","doi":"","event_id":"w-topoinvis","event_title":"TopoInVis: Workshop on Topological Data Analysis and Visualization","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-topoinvis-1034","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-topoinvis0","session_room":"None","session_title":"TopoInVis","session_uid":"w-topoinvis","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TopoInVis"],"time_stamp":"","title":"Revisiting Accurate Geometry for the Morse-Smale Complexes","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-topoinvis-1038","abstract":"This paper presents a nested tracking framework for analyzing cycles in 2D force networks within granular materials. These materials are composed of interacting particles, whose interactions are described by a force network. Understanding the cycles within these networks at various scales and their evolution under external loads is crucial, as they significantly contribute to the mechanical and kinematic properties of the system. Our approach involves computing a cycle hierarchy by partitioning the 2D domain into regions bounded by cycles in the force network. We can adapt concepts from nested tracking graphs originally developed for merge trees by leveraging the duality between this partitioning and the cycles. 
We demonstrate the effectiveness of our method on two force networks derived from experiments with photo-elastic disks.","authors":[{"affiliations":["Link\u00f6ping University, Link\u00f6ping, Sweden"],"email":"farhan.rasheed@liu.se","is_corresponding":true,"name":"Farhan Rasheed"},{"affiliations":["Indian Institute of Science, Bangalore, India"],"email":"abrarnaseer@iisc.ac.in","is_corresponding":false,"name":"Abrar Naseer"},{"affiliations":["Link\u00f6ping university, Norrk\u00f6ping, Sweden"],"email":"emma.nilsson@liu.se","is_corresponding":false,"name":"Emma Nilsson"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"talha.bin.masood@liu.se","is_corresponding":false,"name":"Talha Bin Masood"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"ingrid.hotz@liu.se","is_corresponding":false,"name":"Ingrid Hotz"}],"award":"","doi":"","event_id":"w-topoinvis","event_title":"TopoInVis: Workshop on Topological Data Analysis and Visualization","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-topoinvis-1038","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-topoinvis0","session_room":"None","session_title":"TopoInVis","session_uid":"w-topoinvis","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TopoInVis"],"time_stamp":"","title":"Multi-scale Cycle Tracking in Dynamic Planar Graphs","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-topoinvis-1041","abstract":"Tetrahedral meshes are widely used due to their flexibility and adaptability in representing changes of complex geometries and topology. However, most existing data structures struggle to efficiently encode the irregular connectivity of tetrahedral meshes with billions of vertices. We address this problem by proposing a novel framework for efficient and scalable analysis of large tetrahedral meshes using Apache Spark. The proposed framework, called Tetra-Spark, features optimized approaches to locally compute many connectivity relations by first retrieving the Vertex-Tetrahedron (VT) relation. This strategy significantly improves Tetra-Spark's efficiency in performing morphology computations on large tetrahedral meshes. To prove the effectiveness and scalability of such a framework, we conduct a comprehensive comparison against a vanilla Spark implementation for the analysis of tetrahedral meshes. Our experimental evaluation shows that Tetra-Spark achieves up to a 78x speedup and reduces memory usage by up to 80% when retrieving connectivity relations with the VT relation available. 
This optimized design further accelerates subsequent morphology computations, resulting in up to a 47.7x speedup.","authors":[{"affiliations":["University of Maryland, College Park, College Park, United States"],"email":"yhqian@umd.edu","is_corresponding":true,"name":"Yuehui Qian"},{"affiliations":["Clemson University, Clemson, United States"],"email":"guoxil@clemson.edu","is_corresponding":false,"name":"Guoxi Liu"},{"affiliations":["Clemson University, Clemson, United States"],"email":"fiurici@clemson.edu","is_corresponding":false,"name":"Federico Iuricich"},{"affiliations":["University of Maryland, College Park, United States"],"email":"deflo@umiacs.umd.edu","is_corresponding":false,"name":"Leila De Floriani"}],"award":"","doi":"","event_id":"w-topoinvis","event_title":"TopoInVis: Workshop on Topological Data Analysis and Visualization","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-topoinvis-1041","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-topoinvis0","session_room":"None","session_title":"TopoInVis","session_uid":"w-topoinvis","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["TopoInVis"],"time_stamp":"","title":"Efficient representation and analysis for a large tetrahedral mesh using Apache Spark","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"a-ldav-1002","abstract":"Cuneiform is the earliest known system of writing, first developed for the Sumerian language of southern Mesopotamia in the second half of the 4th millennium BC. Cuneiform signs are obtained by impressing a stylus on fresh clay tablets. For certain purposes, e.g. authentication by seal imprint, some cuneiform tablets were enclosed in clay envelopes, which cannot be opened without destroying them. The aim of our interdisciplinary project is the non-invasive study of clay tablets. A portable X-ray micro-CT scanner is developed to acquire density data of such artifacts on a high-resolution, regular 3D grid at collection sites. The resulting volume data is processed through feature-preserving denoising, extraction of high-accuracy surfaces using a manifold dual marching cubes algorithm and extraction of local features by enhanced curvature rendering and ambient occlusion. For the non-invasive study of cuneiform inscriptions, the tablet is virtually separated from its envelope by curvature-based segmentation. The computational- and data-intensive algorithms are optimized for near-real-time offline usage with limited resources at collection sites. To visualize the complexity-reduced and octree-based compressed representation of surfaces, we develop and implement an interactive application. To facilitate the analysis of such clay tablets, we implement shape-based feature extraction algorithms to enhance cuneiform recognition. 
Our workflow supports innovative 3D display and interaction techniques such as autostereoscopic displays and gesture control.","authors":[{"affiliations":["Universit\u00e4t Hamburg, Hamburg, Germany"],"email":"stephan.olbrich@uni-hamburg.de","is_corresponding":true,"name":"Stephan Olbrich"},{"affiliations":["Universit\u00e4t Hamburg, Hamburg, Germany"],"email":"andreas.beckert@uni-hamburg.de","is_corresponding":false,"name":"Andreas Beckert"},{"affiliations":["Centre National de la Recherche Scientifique (CNRS), Nanterre, France"],"email":"cecile.michel@cnrs.fr","is_corresponding":false,"name":"C\u00e9cile Michel"},{"affiliations":["Deutsches Elektronen-Synchrotron (DESY), Hamburg, Germany","Universit\u00e4t Hamburg, Hamburg, Germany"],"email":"christian.schroer@desy.de","is_corresponding":false,"name":"Christian Schroer"},{"affiliations":["Deutsches Elektronen-Synchrotron (DESY), Hamburg, Germany","Universit\u00e4t Hamburg, Hamburg, Germany"],"email":"samaneh.ehteram@desy.de","is_corresponding":false,"name":"Samaneh Ehteram"},{"affiliations":["Deutsches Elektronen-Synchrotron (DESY), Hamburg, Germany"],"email":"andreas.schropp@desy.de","is_corresponding":false,"name":"Andreas Schropp"},{"affiliations":["Deutsches Elektronen-Synchrotron (DESY), Hamburg, Germany"],"email":"philipp.paetzold@desy.de","is_corresponding":false,"name":"Philipp Paetzold"}],"award":"","doi":"","event_id":"a-ldav","event_title":"LDAV: 13th IEEE Symposium on Large Data Analysis and Visualization","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"a-ldav-1002","image_caption":"","keywords":[],"paper_type":"associated","paper_type_color":"#2672B9","paper_type_name":"Associated Event","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"a-ldav0","session_room":"None","session_title":"LDAV","session_uid":"a-ldav","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["LDAV"],"time_stamp":"","title":"Efficient Analysis and Visualization of High-Resolution Computed Tomography Data for the Exploration of Enclosed Cuneiform Tablets","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"a-ldav-1003","abstract":"Dimensionality reduction (DR) is a well-established approach for the visualization of high-dimensional data sets. While DR methods are often applied to typical DR benchmark data sets in the literature, they might suffer from high runtime complexity and memory requirements, making them unsuitable for large data visualization especially in environments outside of high-performance computing. To perform DR on large data sets, we propose the use of out-of-sample extensions. Such extensions allow inserting new data into existing projections, which we leverage to iteratively project data into a reference projection that consists only of a small manageable subset. This process makes it possible to perform DR out-of-core on large data, which would otherwise not be possible due to memory and runtime limitations. For metric multidimensional scaling (MDS), we contribute an implementation with out-of-sample projection capability since typical software libraries do not support it. 
We provide an evaluation of the projection quality of five common DR algorithms (MDS, PCA, t-SNE, UMAP, and autoencoders) using quality metrics from the literature and analyze the trade-off between the size of the reference set and projection quality. The runtime behavior of the algorithms is also quantified with respect to reference set size, out-of-sample batch size, and dimensionality of the data sets. Furthermore, we compare the out-of-sample approach to other recently introduced DR methods, such as PaCMAP and TriMAP, which claim to handle larger data sets than traditional approaches. To showcase the usefulness of DR on this large scale, we contribute a use case where we analyze ensembles of streamlines amounting to one billion projected instances.","authors":[{"affiliations":["Universit\u00e4t Stuttgart, Stuttgart, Germany"],"email":"lucareichmann01@gmail.com","is_corresponding":false,"name":"Luca Marcel Reichmann"},{"affiliations":["University of Stuttgart, Stuttgart, Germany"],"email":"david.haegele@visus.uni-stuttgart.de","is_corresponding":true,"name":"David H\u00e4gele"},{"affiliations":["University of Stuttgart, Stuttgart, Germany"],"email":"weiskopf@visus.uni-stuttgart.de","is_corresponding":false,"name":"Daniel Weiskopf"}],"award":"","doi":"","event_id":"a-ldav","event_title":"LDAV: 13th IEEE Symposium on Large Data Analysis and Visualization","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"a-ldav-1003","image_caption":"","keywords":[],"paper_type":"associated","paper_type_color":"#2672B9","paper_type_name":"Associated Event","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"a-ldav0","session_room":"None","session_title":"LDAV","session_uid":"a-ldav","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["LDAV"],"time_stamp":"","title":"Out-of-Core Dimensionality Reduction for Large Data via Out-of-Sample Extensions","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"a-ldav-1006","abstract":"Scientists generate petabytes of data daily to help uncover environmental trends or behaviors that are hard to predict. For example, understanding climate simulations based on the long-term average of temperature, precipitation, and other environmental variables is essential to predicting and establishing root causes of future undesirable scenarios and assessing possible mitigation strategies. Unfortunately, bottlenecks in petascale workflows restrict scientists' ability to analyze and visualize the necessary information due to requirements for extensive computational resources, obstacles in data accessibility, and inefficient analysis algorithms. This paper presents an approach to managing, visualizing, and analyzing petabytes of data within a browser on equipment ranging from the top NASA supercomputer to commodity hardware like a laptop. Our approach is based on a novel data fabric abstraction layer that allows querying scientific information in a form that is user-friendly while hiding the complexities of dealing with file systems or cloud services. We also optimize network utilization while streaming from petascale repositories through state-of-the-art progressive compression algorithms. 
Based on this abstraction, we provide customizable dashboards that can be accessed from any device with an internet connection, offering straightforward access to vast amounts of data typically not available to those without access to uniquely expensive hardware resources. Our dashboards provide and improve the ability to access and, more importantly, use massive data for a wide range of users, from top scientists with access to leadership-class computing environments to undergraduate students from disadvantaged backgrounds at minority-serving institutions. We focus on NASA's use of petascale climate datasets as an example of particular societal impact and, therefore, a case where achieving equity in science participation is critical. In particular, we validate our approach by improving the ability of climate scientists to explore their data even on the top NASA supercomputer, introducing the ability to study their data in a fully interactive environment instead of being limited to using pre-choreographed videos that can each take days to generate. We also successfully introduced the same dashboards and simplified training material in an undergraduate class on Geospatial Analysis at a minority-serving campus (Utah State Blanding) where 69% of the students are Native American and 86% are low-income. The same dashboards are also released in simplified form to the general public, providing an unparalleled democratization of the access and use of climate data that can be extended to most scientific domains.","authors":[{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"aashishpanta0@gmail.com","is_corresponding":true,"name":"Aashish Panta"},{"affiliations":["Scientific Computing and Imaging Institute, Salt Lake City, United States"],"email":"xuanhuang@sci.utah.edu","is_corresponding":false,"name":"Xuan Huang"},{"affiliations":["NASA Ames Research Center, Mountain View, United States"],"email":"nina.mccurdy@gmail.com","is_corresponding":false,"name":"Nina McCurdy"},{"affiliations":["NASA, Mountain View, United States"],"email":"david.ellsworth@nasa.gov","is_corresponding":false,"name":"David Ellsworth"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"amy.a.gooch@gmail.com","is_corresponding":false,"name":"Amy Gooch"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"scrgiorgio@gmail.com","is_corresponding":false,"name":"Giorgio Scorzelli"},{"affiliations":["NASA, Pasadena, United States"],"email":"hector.torres.gutierrez@jpl.nasa.gov","is_corresponding":false,"name":"Hector Torres"},{"affiliations":["Caltech, Pasadena, United States"],"email":"pklein@caltech.edu","is_corresponding":false,"name":"Patrice Klein"},{"affiliations":["Utah State University Blanding, Blanding, United States"],"email":"gustavo.ovando@usu.edu","is_corresponding":false,"name":"Gustavo Ovando-Montejo"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"pascucci.valerio@gmail.com","is_corresponding":false,"name":"Valerio Pascucci"}],"award":"","doi":"","event_id":"a-ldav","event_title":"LDAV: 13th IEEE Symposium on Large Data Analysis and Visualization","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"a-ldav-1006","image_caption":"","keywords":[],"paper_type":"associated","paper_type_color":"#2672B9","paper_type_name":"Associated Event","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"a-ldav0","session_room":"None","session_title":"LDAV","session_uid":"a-ldav","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["LDAV"],"time_stamp":"","title":"Web-based Visualization and Analytics of Petascale data: Equity as a Tide that Lifts All Boats","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"a-ldav-1011","abstract":"This paper describes the adaptation of a well-scaling parallel algorithm for computing Morse-Smale segmentations based on path compression to a distributed computational setting. Additionally, we extend the algorithm to efficiently compute connected components in distributed structured and unstructured grids, based either on the connectivity of the underlying mesh or a feature mask. Our implementation is seamlessly integrated with the distributed extension of the Topology ToolKit (TTK), ensuring robust performance and scalability. To demonstrate the practicality and efficiency of our algorithms, we conducted a series of scaling experiments on large-scale datasets, with sizes of up to 4096^3 vertices on up to 64 nodes and 768 cores.","authors":[{"affiliations":["RPTU Kaiserslautern-Landau, Kaiserslautern, Germany"],"email":"mswill@rhrk.uni-kl.de","is_corresponding":true,"name":"Michael Will"},{"affiliations":["RPTU Kaiserslautern-Landau, Kaiserslautern, Germany"],"email":"jl@jluk.de","is_corresponding":false,"name":"Jonas Lukasczyk"},{"affiliations":["CNRS, Paris, France","Sorbonne Universit\u00e9, Paris, France"],"email":"julien.tierny@sorbonne-universite.fr","is_corresponding":false,"name":"Julien Tierny"},{"affiliations":["RPTU Kaiserslautern-Landau, Kaiserslautern, Germany"],"email":"garth@rptu.de","is_corresponding":false,"name":"Christoph Garth"}],"award":"","doi":"","event_id":"a-ldav","event_title":"LDAV: 13th IEEE Symposium on Large Data Analysis and Visualization","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"a-ldav-1011","image_caption":"","keywords":[],"paper_type":"associated","paper_type_color":"#2672B9","paper_type_name":"Associated Event","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"a-ldav0","session_room":"None","session_title":"LDAV","session_uid":"a-ldav","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["LDAV"],"time_stamp":"","title":"Distributed Path Compression for Piecewise Linear Morse-Smale Segmentations and Connected Components","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"a-ldav-1016","abstract":"We propose and discuss a paradigm that allows for expressing data-parallel rendering with the classically non-parallel ANARI API. 
We propose this as a new standard for data-parallel rendering, describe two different implementations of this paradigm, and use multiple sample integrations into existing applications to show how easy it is to adopt and what can be gained from doing so.","authors":[{"affiliations":["NVIDIA, Salt Lake City, United States"],"email":"ingowald@gmail.com","is_corresponding":false,"name":"Ingo Wald"},{"affiliations":["University of Cologne, Cologne, Germany"],"email":"zellmann@uni-koeln.de","is_corresponding":true,"name":"Stefan Zellmann"},{"affiliations":["NVIDIA, Austin, United States"],"email":"jeffamstutz@gmail.com","is_corresponding":false,"name":"Jefferson Amstutz"},{"affiliations":["University of California, Davis, Davis, United States"],"email":"qadwu@ucdavis.edu","is_corresponding":false,"name":"Qi Wu"},{"affiliations":["NVIDIA, Santa Clara, United States"],"email":"kgriffin@nvidia.com","is_corresponding":false,"name":"Kevin Shawn Griffin"},{"affiliations":["VSB - Technical University of Ostrava, Ostrava, Czech Republic"],"email":"milan.jaros@vsb.cz","is_corresponding":false,"name":"Milan Jaro\u0161"},{"affiliations":["University of Cologne, Cologne, Germany"],"email":"wesner@uni-koeln.de","is_corresponding":false,"name":"Stefan Wesner"}],"award":"","doi":"","event_id":"a-ldav","event_title":"LDAV: 13th IEEE Symposium on Large Data Analysis and Visualization","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"a-ldav-1016","image_caption":"","keywords":[],"paper_type":"associated","paper_type_color":"#2672B9","paper_type_name":"Associated Event","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"a-ldav0","session_room":"None","session_title":"LDAV","session_uid":"a-ldav","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["LDAV"],"time_stamp":"","title":"Standardized Data-Parallel Rendering Using ANARI","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"a-ldav-1018","abstract":"Functional approximation as a high-order continuous representation provides a more accurate value and gradient query compared to the traditional discrete volume representation. Volume visualization directly rendered from functional approximation generates high-quality rendering results without high-order artifacts caused by trilinear interpolations. However, querying an encoded functional approximation is computationally expensive, especially when the input dataset is large, making functional approximation impractical for interactive visualization. In this paper, we propose a novel functional approximation multi-resolution representation, Adaptive-FAM, which is lightweight and fast to query. We also design a GPU-accelerated out-of-core multi-resolution volume visualization framework that directly utilizes the Adaptive-FAM representation to generate high-quality rendering with interactive responsiveness. Our method can not only dramatically decrease the caching time, one of the main contributors to input latency, but also effectively improve the cache hit rate through prefetching. 
Our approach significantly outperforms the traditional functional approximation method in terms of input latency while maintaining comparable rendering quality.","authors":[{"affiliations":["University of Nebraska-Lincoln, Lincoln, United States"],"email":"jianxin.sun@huskers.unl.edu","is_corresponding":true,"name":"Jianxin Sun"},{"affiliations":["Argonne National Laboratory, Lemont, United States"],"email":"dlenz@anl.gov","is_corresponding":false,"name":"David Lenz"},{"affiliations":["University of Nebraska-Lincoln, Lincoln, United States"],"email":"yu@cse.unl.edu","is_corresponding":false,"name":"Hongfeng Yu"},{"affiliations":["Argonne National Laboratory, Lemont, United States"],"email":"tpeterka@mcs.anl.gov","is_corresponding":false,"name":"Tom Peterka"}],"award":"","doi":"","event_id":"a-ldav","event_title":"LDAV: 13th IEEE Symposium on Large Data Analysis and Visualization","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"a-ldav-1018","image_caption":"","keywords":[],"paper_type":"associated","paper_type_color":"#2672B9","paper_type_name":"Associated Event","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"a-ldav0","session_room":"None","session_title":"LDAV","session_uid":"a-ldav","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["LDAV"],"time_stamp":"","title":"Adaptive Multi-Resolution Encoding for Interactive Large-Scale Volume Visualization through Functional Approximation","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"s-vds-1000","abstract":"Efficient public transport systems are crucial for sustainable urban development as cities face increasing mobility demands. Yet, many public transport networks struggle to meet diverse user needs due to historical development, urban constraints, and financial limitations. Traditionally, planning of transport network structure is often based on limited surveys, expert opinions, or partial usage statistics. This provides an incomplete basis for decision-making. We introduce a data-driven approach to public transport planning and optimization, calculating detailed accessibility measures at the individual housing level. Our visual analytics workflow combines population-group-based simulations with dynamic infrastructure analysis, utilizing a scenario-based model to simulate daily travel patterns of varied demographic groups, including schoolchildren, students, workers, and pensioners. These population groups, each with unique mobility requirements and routines, interact with the transport system under different scenarios traveling to and from Points of Interest (POI), assessed through travel time calculations. Results are visualized through heatmaps, density maps, and network overlays, as well as detailed statistics. Our system allows us to analyze both the underlying data and simulation results on multiple levels of granularity, delivering both broad insights and granular details. Case studies with the city of Konstanz, Germany, reveal key areas where public transport does not meet specific needs, confirmed through a formative user study. 
Due to the high cost of changing legacy networks, our analysis facilitates the identification of strategic enhancements, such as optimized schedules, rerouting, and a few targeted stop relocations, highlighting consequential variations in accessibility and pinpointing critical service gaps. Our research advances urban transport analytics by providing policymakers and citizens with a system that delivers both broad insights and granular detail into public transport services for a data-driven quality assessment at the housing level.","authors":[{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"yannick.metz@uni-konstanz.de","is_corresponding":false,"name":"Yannick Metz"},{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"dennis-fabian.ackermann@uni-konstanz.de","is_corresponding":false,"name":"Dennis Ackermann"},{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"keim@uni-konstanz.de","is_corresponding":false,"name":"Daniel Keim"},{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"max.fischer@uni-konstanz.de","is_corresponding":true,"name":"Maximilian T. Fischer"}],"award":"","doi":"","event_id":"s-vds","event_title":"VDS: Visualization in Data Science Symposium","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"s-vds-1000","image_caption":"","keywords":[],"paper_type":"associated","paper_type_color":"#2672B9","paper_type_name":"Associated Event","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"s-vds0","session_room":"None","session_title":"VDS","session_uid":"s-vds","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["VDS"],"time_stamp":"","title":"Interactive Public Transport Infrastructure Analysis through Mobility Profiles: Making the Mobility Transition Transparent","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"s-vds-1002","abstract":"This position paper explores the interplay between automation and human involvement in data science. It synthesizes perspectives from Automated Data Science (AutoDS) and Interactive Data Visualization (VIS), which traditionally represent opposing ends of the human-machine spectrum. While AutoDS aims to enhance efficiency by reducing human tasks, VIS emphasizes the importance of nuanced understanding, innovation, and context provided by human involvement. This paper examines these dichotomies through an online survey and advocates for a balanced approach that harmonizes the efficiency of automation with the irreplaceable insights of human expertise. 
Ultimately, we address the essential question of not just what we can automate, but what we should automate, seeking strategies that prioritize technological advancement alongside the fundamental need for human oversight.","authors":[{"affiliations":["Tufts University, Boston, United States"],"email":"jen@cs.tufts.edu","is_corresponding":true,"name":"Jen Rogers"},{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, INRIA, Orsay, France"],"email":"mehdi.chakhchoukh@universite-paris-saclay.fr","is_corresponding":false,"name":"Mehdi Chakhchoukh"},{"affiliations":["Leiden Universiteit, Leiden, Netherlands"],"email":"anastacio@aim.rwth-aachen.de","is_corresponding":false,"name":"Marie Anastacio"},{"affiliations":["Tulane University, New Orleans, United States"],"email":"rfaust1@tulane.edu","is_corresponding":false,"name":"Rebecca Faust"},{"affiliations":["University of Warwick, Coventry, United Kingdom"],"email":"cagatay.turkay@warwick.ac.uk","is_corresponding":false,"name":"Cagatay Turkay"},{"affiliations":["University of Wyoming, Laramie, United States"],"email":"larsko@uwyo.edu","is_corresponding":false,"name":"Lars Kotthoff"},{"affiliations":["University of Stuttgart, Stuttgart, Germany"],"email":"steffen.koch@vis.uni-stuttgart.de","is_corresponding":false,"name":"Steffen Koch"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"andreas.kerren@liu.se","is_corresponding":false,"name":"Andreas Kerren"},{"affiliations":["University of Zurich, Zurich, Switzerland"],"email":"bernard@ifi.uzh.ch","is_corresponding":false,"name":"J\u00fcrgen Bernard"}],"award":"","doi":"","event_id":"s-vds","event_title":"VDS: Visualization in Data Science Symposium","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"s-vds-1002","image_caption":"","keywords":[],"paper_type":"associated","paper_type_color":"#2672B9","paper_type_name":"Associated Event","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"s-vds0","session_room":"None","session_title":"VDS","session_uid":"s-vds","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["VDS"],"time_stamp":"","title":"Visualization and Automation in Data Science: Exploring the Paradox of Humans-in-the-Loop","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"s-vds-1007","abstract":"Categorical data does not have an intrinsic definition of distance or order, and therefore, established visualization techniques for categorical data only allow for a set-based or frequency-based analysis, e.g., through Euler diagrams or Parallel Sets, and do not support a similarity-based analysis. We present a novel dimensionality reduction-based visualization for categorical data, which is based on defining the distance of two data items as the number of varying attributes. Our technique enables users to pre-attentively detect groups of similar data items and observe the properties of the projection, such as attributes strongly influencing the embedding. Our prototype visually encodes data properties in an enhanced scatterplot-like visualization, visualizing attributes in the background to show the distribution of categories. In addition, we propose two graph-based measures to quantify the plot's visual quality, which rank attributes according to their contribution to cluster cohesion. 
To demonstrate the capabilities of our similarity-based projection method, we compare it to Euler diagrams and Parallel Sets regarding visual scalability and evaluate it quantitatively on seven real-world datasets using a range of common quality measures. Further, we validate the benefits of our approach through an expert study with five data scientists analyzing the Titanic and Mushroom datasets with up to 23 attributes and 8124 category combinations. Our results indicate that our Categorical Data Map offers an effective analysis method for large datasets with a high number of category combinations.","authors":[{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"frederik.dennig@uni-konstanz.de","is_corresponding":true,"name":"Frederik L. Dennig"},{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"lucas.joos@uni-konstanz.de","is_corresponding":false,"name":"Lucas Joos"},{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"patrick.paetzold@uni-konstanz.de","is_corresponding":false,"name":"Patrick Paetzold"},{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"blumbergdaniela@gmail.com","is_corresponding":false,"name":"Daniela Blumberg"},{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"oliver.deussen@uni-konstanz.de","is_corresponding":false,"name":"Oliver Deussen"},{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"keim@uni-konstanz.de","is_corresponding":false,"name":"Daniel Keim"},{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"max.fischer@uni-konstanz.de","is_corresponding":false,"name":"Maximilian T. Fischer"}],"award":"","doi":"","event_id":"s-vds","event_title":"VDS: Visualization in Data Science Symposium","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"s-vds-1007","image_caption":"","keywords":[],"paper_type":"associated","paper_type_color":"#2672B9","paper_type_name":"Associated Event","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"s-vds0","session_room":"None","session_title":"VDS","session_uid":"s-vds","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["VDS"],"time_stamp":"","title":"The Categorical Data Map: A Multidimensional Scaling-Based Approach","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"s-vds-1013","abstract":"Clustering is an essential technique across various domains, such as data science, machine learning, and eXplainable Artificial Intelligence. Information visualization and visual analytics techniques have been proven to effectively support human involvement in the visual exploration of clustered data to enhance the understanding and refinement of cluster assignments. This paper presents an attempt at a deep and exhaustive evaluation of the perceptual aspects of clustering quality metrics, focusing on the Davies-Bouldin Index, Dunn Index, Calinski-Harabasz Index, and Silhouette Score. Our research is centered around two main objectives: a) assessing the human perception of common cluster validity indices (CVIs) in 2D scatterplots and b) exploring the potential of Large Language Models (LLMs), in particular GPT-4o, to emulate the assessed human perception. 
By discussing the obtained results and highlighting limitations and areas for further exploration, this paper aims to propose a foundation for future research activities.","authors":[{"affiliations":["Sapienza University of Rome, Rome, Italy"],"email":"blasilli@diag.uniroma1.it","is_corresponding":true,"name":"Graziano Blasilli"},{"affiliations":["Northeastern University, Boston, United States"],"email":"kerrigan.d@northeastern.edu","is_corresponding":false,"name":"Daniel Kerrigan"},{"affiliations":["Northeastern University, Boston, United States"],"email":"e.bertini@northeastern.edu","is_corresponding":false,"name":"Enrico Bertini"},{"affiliations":["Sapienza University of Rome, Rome, Italy"],"email":"santucci@diag.uniroma1.it","is_corresponding":false,"name":"Giuseppe Santucci"}],"award":"","doi":"","event_id":"s-vds","event_title":"VDS: Visualization in Data Science Symposium","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"s-vds-1013","image_caption":"","keywords":[],"paper_type":"associated","paper_type_color":"#2672B9","paper_type_name":"Associated Event","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"s-vds0","session_room":"None","session_title":"VDS","session_uid":"s-vds","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["VDS"],"time_stamp":"","title":"Towards a Visual Perception-Based Analysis of Clustering Quality Metrics","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"s-vds-1021","abstract":"Recommender systems have become integral to digital experiences, shaping user interactions and preferences across various platforms. Despite their widespread use, these systems often suffer from algorithmic biases that can lead to unfair and unsatisfactory user experiences. This study introduces an interactive tool designed to help users comprehend and explore the impacts of algorithmic harms in recommender systems. By leveraging visualizations, counterfactual explanations, and interactive modules, the tool allows users to investigate how biases such as miscalibration, stereotypes, and filter bubbles affect their recommendations. Informed by in-depth user interviews, the tool enables both general users and researchers to benefit from increased transparency and personalized impact assessments, ultimately fostering a better understanding of algorithmic biases and contributing to more equitable recommendation outcomes. 
This work provides valuable insights for future research and practical applications in mitigating bias and enhancing fairness in machine learning algorithms.","authors":[{"affiliations":["University of Pittsburgh, Pittsburgh, United States"],"email":"yongsu.ahn@pitt.edu","is_corresponding":true,"name":"Yongsu Ahn"},{"affiliations":["School of Computing and Information, University of Pittsburgh, Pittsburgh, United States"],"email":"quinnkwolter@gmail.com","is_corresponding":false,"name":"Quinn K Wolter"},{"affiliations":["Quest Diagnostics, Pittsburgh, United States"],"email":"jonilyndick@gmail.com","is_corresponding":false,"name":"Jonilyn Dick"},{"affiliations":["Quest Diagnostics, Pittsburgh, United States"],"email":"janetad99@gmail.com","is_corresponding":false,"name":"Janet Dick"},{"affiliations":["University of Pittsburgh, Pittsburgh, United States"],"email":"yurulin@pitt.edu","is_corresponding":false,"name":"Yu-Ru Lin"}],"award":"","doi":"","event_id":"s-vds","event_title":"VDS: Visualization in Data Science Symposium","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"s-vds-1021","image_caption":"","keywords":[],"paper_type":"associated","paper_type_color":"#2672B9","paper_type_name":"Associated Event","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"s-vds0","session_room":"None","session_title":"VDS","session_uid":"s-vds","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["VDS"],"time_stamp":"","title":"Interactive Counterfactual Exploration of Algorithmic Harms in Recommender Systems","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"s-vds-1029","abstract":"This position paper discusses the profound impact of Large Language Models (LLMs) on semantic change, emphasizing the need for comprehensive monitoring and visualization techniques. Building on established concepts from linguistics, we examine the interdependency between mental and language models, discussing how LLMs influence and are influenced by human cognition and societal context. We introduce three primary theories to conceptualize such influences: Recontextualization, Standardization, and Semantic Dementia, illustrating how LLMs drive, standardize, and potentially degrade language semantics. Our subsequent review categorizes methods for visualizing semantic change into frequency-based, embedding-based, and context-based techniques and is the first to assess their effectiveness in capturing linguistic evolution: embedding-based methods are highlighted as crucial for a detailed semantic analysis, reflecting both broad trends and specific linguistic changes. We underscore the need for novel visual, interactive tools to monitor and explain semantic changes induced by LLMs, ensuring the preservation of linguistic diversity and mitigating linguistic biases. 
This work provides essential insights for future research on semantic change visualization and the dynamic nature of language evolution in the times of LLMs.","authors":[{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"raphael.buchmueller@uni-konstanz.de","is_corresponding":true,"name":"Raphael Buchm\u00fcller"},{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"friederike.koerte@uni-konstanz.de","is_corresponding":false,"name":"Friederike K\u00f6rte"},{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"keim@uni-konstanz.de","is_corresponding":false,"name":"Daniel Keim"}],"award":"","doi":"","event_id":"s-vds","event_title":"VDS: Visualization in Data Science Symposium","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"s-vds-1029","image_caption":"","keywords":[],"paper_type":"associated","paper_type_color":"#2672B9","paper_type_name":"Associated Event","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"s-vds0","session_room":"None","session_title":"VDS","session_uid":"s-vds","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["VDS"],"time_stamp":"","title":"Seeing the Shift: Keep an Eye on Semantic Changes in Times of LLMs","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-beliv-1001","abstract":"I analyze the evolution of papers certified by the Graphics Replicability Stamp Initiative (GRSI) to be reproducible, with a specific focus on the subset of publications that address visualization-related topics. With this analysis I show that, while the number of papers is increasing overall and within the visualization field, we still have to improve quite a bit to escape the replication crisis. I base my analysis on the data published by the GRSI as well as publication data for the different venues in visualization and lists of journal papers that have been presented at visualization-focused conferences. I also analyze the differences between the involved journals as well as the percentage of reproducible papers in the different presentation venues. Furthermore, I look at the authors of the publications and, in particular, their affiliation countries to see where most reproducible papers come from. Finally, I discuss potential reasons for the low reproducibility numbers and suggest possible ways to overcome these obstacles. 
This paper is reproducible itself, with source code and data available from github.com/tobiasisenberg/Visualization-Reproducibility as well as a free paper copy and all supplemental materials at osf.io/mvnbj.","authors":[{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Orsay, France","Inria, Saclay, France"],"email":"tobias.isenberg@gmail.com","is_corresponding":true,"name":"Tobias Isenberg"}],"award":"","doi":"","event_id":"w-beliv","event_title":"BELIV: evaluation and BEyond - methodoLogIcal approaches for Visualization","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-beliv-1001","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-beliv0","session_room":"None","session_title":"BELIV","session_uid":"w-beliv","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["BELIV"],"time_stamp":"","title":"The State of Reproducibility Stamps for Visualization Research Papers","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-beliv-1004","abstract":"In the rapidly evolving field of information visualization, rigorous evaluation is essential for validating new techniques, understanding user interactions, and demonstrating the effectiveness of visualizations. The evaluation of visualization systems is fundamental to ensuring their effectiveness, usability, and impact. Faithful evaluations provide valuable insights into how users interact with and perceive the system, enabling designers to make informed decisions about design choices and improvements. However, an emerging trend of multiple evaluations within a single study raises critical questions about the sustainability, feasibility, and methodological rigor of such an approach. So, the question of how many evaluations are enough is situational and cannot be formulaically determined. Our objective is to summarize current trends and patterns to understand general practices across different contribution and evaluation types. New researchers and students, influenced by this trend, may believe that multiple evaluations are necessary for a study. However, the number of evaluations in a study should depend on its contributions and merits, not on the trend of including multiple evaluations to strengthen a paper. In this position paper, we identify this trend through a non-exhaustive literature survey of TVCG papers from issue 1 in 2023 and 2024. 
We then discuss various evaluation strategy patterns in the information visualization field and how this paper will open avenues for further discussion.","authors":[{"affiliations":["University of North Carolina at Chapel Hill, Chapel Hill, United States"],"email":"flin@unc.edu","is_corresponding":false,"name":"Feng Lin"},{"affiliations":["University of North Carolina-Chapel Hill, Chapel Hill, United States"],"email":"zeyuwang@cs.unc.edu","is_corresponding":false,"name":"Arran Zeyu Wang"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"dilshadur@sci.utah.edu","is_corresponding":false,"name":"Md Dilshadur Rahman"},{"affiliations":["University of North Carolina-Chapel Hill, Chapel Hill, United States"],"email":"danielle.szafir@cs.unc.edu","is_corresponding":false,"name":"Danielle Albers Szafir"},{"affiliations":["University of Oklahoma, Norman, United States"],"email":"quadri@ou.edu","is_corresponding":true,"name":"Ghulam Jilani Quadri"}],"award":"","doi":"","event_id":"w-beliv","event_title":"BELIV: evaluation and BEyond - methodoLogIcal approaches for Visualization","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-beliv-1004","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-beliv0","session_room":"None","session_title":"BELIV","session_uid":"w-beliv","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["BELIV"],"time_stamp":"","title":"How Many Evaluations are Enough? A Position Paper on Evaluation Trend in Information Visualization","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-beliv-1005","abstract":"Various standardized tests exist that assess individuals' visualization literacy. Their use can help to draw conclusions from studies. However, it is not taken into account that the test itself can create a pressure situation where participants might fear being exposed and assessed negatively. This is especially problematic when testing domain experts in design studies. We conducted interviews with experts from different domains performing the Mini-VLAT test for visualization literacy to identify potential problems. Our participants reported that the time limit per question, ambiguities in the questions and visualizations, and missing steps in the test procedure mainly had an impact on their performance and content. 
We discuss possible changes to the test design to address these issues and how such assessment methods could be integrated into existing evaluation procedures.","authors":[{"affiliations":["University of Stuttgart, Stuttgart, Germany"],"email":"seyda.oeney@visus.uni-stuttgart.de","is_corresponding":true,"name":"Seyda \u00d6ney"},{"affiliations":["University of Stuttgart, Stuttgart, Germany"],"email":"moataz.abdelaal@visus.uni-stuttgart.de","is_corresponding":false,"name":"Moataz Abdelaal"},{"affiliations":["University of Stuttgart, Stuttgart, Germany"],"email":"kuno.kurzhals@visus.uni-stuttgart.de","is_corresponding":false,"name":"Kuno Kurzhals"},{"affiliations":["University of Stuttgart, Stuttgart, Germany"],"email":"paul.betz@sowi.uni-stuttgart.de","is_corresponding":false,"name":"Paul Betz"},{"affiliations":["University of Stuttgart, Stuttgart, Germany"],"email":"cordula.kropp@sowi.uni-stuttgart.de","is_corresponding":false,"name":"Cordula Kropp"},{"affiliations":["University of Stuttgart, Stuttgart, Germany"],"email":"weiskopf@visus.uni-stuttgart.de","is_corresponding":false,"name":"Daniel Weiskopf"}],"award":"","doi":"","event_id":"w-beliv","event_title":"BELIV: evaluation and BEyond - methodoLogIcal approaches for Visualization","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-beliv-1005","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-beliv0","session_room":"None","session_title":"BELIV","session_uid":"w-beliv","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["BELIV"],"time_stamp":"","title":"Testing the Test: Observations When Assessing Visualization Literacy of Domain Experts","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-beliv-1007","abstract":"In visualization, the process of transforming raw data into visually comprehensible representations is pivotal. While existing models like the Information Visualization Reference Model describe the data-to-visual mapping process, they often overlook a crucial intermediary step: design-specific transformations. This process, occurring after data transformation but before visual-data mapping, further derives data, such as groupings, layout, and statistics, that are essential to properly render the visualization. In this paper, we advocate for a deeper exploration of design-specific transformations, highlighting their importance in understanding visualization properties, particularly in relation to user tasks. We incorporate design-specific transformations into the Information Visualization Reference Model and propose a new formalism that encompasses the user task as a function over data. The resulting formalism offers three key benefits over existing visualization models: (1) describing tasks as compositions of functions, (2) enabling analysis of data transformations for visual-data mapping, and (3) empowering reasoning about visualization correctness and effectiveness. 
We further discuss the potential implications of this model for visualization theory and visualization experiment design.","authors":[{"affiliations":["Columbia University, New York City, United States"],"email":"ewu@cs.columbia.edu","is_corresponding":true,"name":"Eugene Wu"},{"affiliations":["Tufts University, Medford, United States"],"email":"remco@cs.tufts.edu","is_corresponding":false,"name":"Remco Chang"}],"award":"","doi":"","event_id":"w-beliv","event_title":"BELIV: evaluation and BEyond - methodoLogIcal approaches for Visualization","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-beliv-1007","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-beliv0","session_room":"None","session_title":"BELIV","session_uid":"w-beliv","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["BELIV"],"time_stamp":"","title":"Design-Specific Transforms In Visualization","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-beliv-1008","abstract":"Stress is among the most commonly employed quality metrics and optimization criteria for dimension reduction projections of high-dimensional data. Complex, high-dimensional data is ubiquitous across many scientific disciplines, including machine learning, biology, and the social sciences. One of the primary methods of visualizing these datasets is with two-dimensional scatter plots that visually capture some properties of the data. Because visually determining the accuracy of these plots is challenging, researchers often use quality metrics to measure the projection\u2019s accuracy or faithfulness to the full data. One of the most commonly employed metrics, normalized stress, is sensitive to uniform scaling (stretching, shrinking) of the projection, despite this act not meaningfully changing anything about the projection. We investigate the effect of scaling on stress and other distance-based quality metrics analytically and empirically by showing just how much the values change and how this affects dimension reduction technique evaluations. 
We introduce a simple technique to make normalized stress scale-invariant and show that it accurately captures expected behavior on a small benchmark.","authors":[{"affiliations":["University of Arizona, Tucson, United States"],"email":"ksmelser@arizona.edu","is_corresponding":false,"name":"Kiran Smelser"},{"affiliations":["University of Arizona, Tucson, United States"],"email":"jacobmiller1@arizona.edu","is_corresponding":true,"name":"Jacob Miller"},{"affiliations":["University of Arizona, Tucson, United States"],"email":"stephen.kobourov@tum.de","is_corresponding":false,"name":"Stephen Kobourov"}],"award":"","doi":"","event_id":"w-beliv","event_title":"BELIV: evaluation and BEyond - methodoLogIcal approaches for Visualization","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-beliv-1008","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-beliv0","session_room":"None","session_title":"BELIV","session_uid":"w-beliv","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["BELIV"],"time_stamp":"","title":"Normalized Stress is Not Normalized: How to Interpret Stress Correctly","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-beliv-1009","abstract":"The cognitive processes involved in understanding and misunderstanding visualizations have not yet been fully clarified, even for well-studied designs, such as bar charts. In particular, little is known about whether viewers can improve their learning processes by getting better insight into their own cognition. This paper describes a simple method to measure the role of such metacognitive understanding when learning to read bar charts. For this purpose, we conducted an experiment in which we investigated bar chart learning repeatedly, and tested how learning over trials was affected by metacognitive understanding. 
We integrate the findings into a model of metacognitive processing of visualizations, and discuss implications for the design of visualizations.","authors":[{"affiliations":["Heidelberg University, Heidelberg, Germany"],"email":"antonia.schlieder@t-online.de","is_corresponding":true,"name":"Antonia Schlieder"},{"affiliations":["Heidelberg University, Heidelberg, Germany"],"email":"jan.rummel@psychologie.uni-heidelberg.de","is_corresponding":false,"name":"Jan Rummel"},{"affiliations":["Ruprecht-Karls-Universit\u00e4t Heidelberg, Heidelberg, Germany"],"email":"palbers@mathi.uni-heidelberg.de","is_corresponding":false,"name":"Peter Albers"},{"affiliations":["Heidelberg University, Heidelberg, Germany"],"email":"sadlo@uni-heidelberg.de","is_corresponding":false,"name":"Filip Sadlo"}],"award":"","doi":"","event_id":"w-beliv","event_title":"BELIV: evaluation and BEyond - methodoLogIcal approaches for Visualization","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-beliv-1009","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-beliv0","session_room":"None","session_title":"BELIV","session_uid":"w-beliv","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["BELIV"],"time_stamp":"","title":"The Role of Metacognition in Understanding Deceptive Bar Charts","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-beliv-1015","abstract":"Empirical studies in visualisation often compare visual representations to identify the most effective visualisation for a particular visual judgement or decision making task. However, the effectiveness of a visualisation may be intrinsically related to, and difficult to distinguish from, factors such as visualisation literacy. Complicating matters further, visualisation literacy itself is not a singular intrinsic quality, but can be a result of several distinct challenges that a viewer encounters when performing a task with a visualisation. In this paper, we describe how such challenges apply to experiments that we use to evaluate visualisations, and discuss a set of considerations for designing studies in the future. Finally, we argue that aspects of the study design which are often neglected or overlooked (such as the onboarding of participants, tutorials, training etc.) 
can play a substantial role in the results of a study and can potentially impact the conclusions that researchers are able to draw from it.","authors":[{"affiliations":["Northwestern University, Evanston, United States"],"email":"abhraneel@u.northwestern.edu","is_corresponding":true,"name":"Abhraneel Sarma"},{"affiliations":["Northwestern University, Evanston, United States"],"email":"shenglong@u.northwestern.edu","is_corresponding":false,"name":"Sheng Long"},{"affiliations":["Northeastern University, Portland, United States"],"email":"m.correll@northeastern.edu","is_corresponding":false,"name":"Michael Correll"},{"affiliations":["Northwestern University, Chicago, United States"],"email":"matthew.kay@gmail.com","is_corresponding":false,"name":"Matthew Kay"}],"award":"","doi":"","event_id":"w-beliv","event_title":"BELIV: evaluation and BEyond - methodoLogIcal approaches for Visualization","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-beliv-1015","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-beliv0","session_room":"None","session_title":"BELIV","session_uid":"w-beliv","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["BELIV"],"time_stamp":"","title":"Tasks and Telephone: Understanding Barriers to Inference due to Issues in Experiment Design","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-beliv-1016","abstract":"This position paper critically examines the graphical inference framework for evaluating visualizations using the lineup task. We present a re-analysis of lineup task data using signal detection theory, applying four Bayesian non-linear models to investigate whether color ramps with more color name variation increase false discoveries. Our study utilizes data from Reda and Szafir\u2019s previous work [20], corroborating their findings while providing additional insights into sensitivity and bias differences across colormaps and individuals. We suggest improvements to lineup study designs and explore the connections between graphical inference, signal detection theory, and statistical decision theory. Our work contributes a more perceptually grounded approach for assessing visualization effectiveness and offers a path forward for better aligning graphical inference methods with human cognition. The results have implications for the development and evaluation of visualizations, particularly for exploratory data analysis scenarios. 
Supplementary materials are available at https://osf.io/xd5cj/.","authors":[{"affiliations":["Northwestern University, Evanston, United States"],"email":"shenglong@u.northwestern.edu","is_corresponding":true,"name":"Sheng Long"},{"affiliations":["Northwestern University, Chicago, United States"],"email":"matthew.kay@gmail.com","is_corresponding":false,"name":"Matthew Kay"}],"award":"","doi":"","event_id":"w-beliv","event_title":"BELIV: evaluation and BEyond - methodoLogIcal approaches for Visualization","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-beliv-1016","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-beliv0","session_room":"None","session_title":"BELIV","session_uid":"w-beliv","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["BELIV"],"time_stamp":"","title":"Old Wine in a New Bottle? Analysis of Visual Lineups with Signal Detection Theory","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-beliv-1018","abstract":"Visualising personal experiences is often described as a means for self-reflection, shaping one\u2019s identity, and sharing it with others. In policymaking, personal narratives are regarded as an important source of intelligence to shape public discourse and policy. Therefore, policymakers are interested in the interplay between individual-level experiences and macro-political processes that play into shaping these experiences. In this context, visualisation is regarded as a medium for advocacy, creating a power balance between individuals and the power structures that influence their health and well-being. In this paper, we offer a politically-framed reflection on how visualisation creators define lived experience data, and what design choices they make for visualising them. We identify data characteristics and design choices that enable visualisation authors and consumers to engage in a process of narrative co-construction, while navigating structural forms of inequality. 
Our political framing is driven by ideas of master and alternative narratives from Diversity Science, in which authors and narrators engage in a process of negotiation with power structures to either maintain or challenge the status quo.","authors":[{"affiliations":["City, University of London, London, United Kingdom"],"email":"mai.elshehaly@city.ac.uk","is_corresponding":true,"name":"Mai Elshehaly"},{"affiliations":["City, University of London, London, United Kingdom"],"email":"mirela.reljan-delaney@city.ac.uk","is_corresponding":false,"name":"Mirela Reljan-Delaney"},{"affiliations":["City, University of London, London, United Kingdom"],"email":"j.dykes@city.ac.uk","is_corresponding":false,"name":"Jason Dykes"},{"affiliations":["City, University of London, London, United Kingdom"],"email":"a.slingsby@city.ac.uk","is_corresponding":false,"name":"Aidan Slingsby"},{"affiliations":["City, University of London, London, United Kingdom"],"email":"j.d.wood@city.ac.uk","is_corresponding":false,"name":"Jo Wood"},{"affiliations":["University of Edinburgh, Edinburgh, United Kingdom"],"email":"sam.spiegel@ed.ac.uk","is_corresponding":false,"name":"Sam Spiegel"}],"award":"","doi":"","event_id":"w-beliv","event_title":"BELIV: evaluation and BEyond - methodoLogIcal approaches for Visualization","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-beliv-1018","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-beliv0","session_room":"None","session_title":"BELIV","session_uid":"w-beliv","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["BELIV"],"time_stamp":"","title":"Visualising Lived Experience: Learning from a Master and Alternative Narrative Framing","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-beliv-1020","abstract":"The generation and presentation of counterfactual explanations (CFEs) are a commonly used, model-agnostic approach to helping end-users reason about the validity of AI/ML model outputs. By demonstrating how sensitive the model's outputs are to minor variations, CFEs are thought to improve understanding of the model's behavior, identify potential biases, and increase the transparency of 'black box models'. Here, we examine how CFEs support a diverse audience, both with and without technical expertise, in understanding the results of an LLM-informed sentiment analysis. We conducted a preliminary pilot study with ten individuals with varied expertise, ranging from NLP, ML, and ethics to specific domains. All individuals were actively using or working with AI/ML technology as part of their daily jobs. Through semi-structured interviews grounded in a set of concrete examples, we examined how CFEs influence participants' perceptions of the model's correctness, fairness, and trustworthiness, and how visualization of CFEs specifically influences those perceptions. We also surface how participants wrestle with their internal definitions of \u2018explainability\u2019, relative to what CFEs present, their cultures, and backgrounds, in addition to the much more widely studied phenomenon of comparing their baseline expectations of the model's performance. 
Compared to prior research, our findings highlight the sociotechnical frictions that CFEs surface but do not necessarily remedy. We conclude with the design implications of developing transparent AI/ML visualization systems for more general tasks.","authors":[{"affiliations":["Tableau Research, Seattle, United States"],"email":"amcrisan@uwaterloo.ca","is_corresponding":true,"name":"Anamaria Crisan"},{"affiliations":["Tableau Software, Seattle, United States"],"email":"nbutters@salesforce.com","is_corresponding":false,"name":"Nathan Butters"},{"affiliations":["Tableau Software, Seattle, United States"],"email":"zoezoe.cc@gmail.com","is_corresponding":false,"name":"Zoe Zoe"}],"award":"","doi":"","event_id":"w-beliv","event_title":"BELIV: evaluation and BEyond - methodoLogIcal approaches for Visualization","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-beliv-1020","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-beliv0","session_room":"None","session_title":"BELIV","session_uid":"w-beliv","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["BELIV"],"time_stamp":"","title":"Exploring Subjective Notions of Explainability through Counterfactual Visualization of Sentiment Analysis","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-beliv-1021","abstract":"The replication crisis has spawned a revolution in scientific methods, aimed at increasing the transparency, robustness, and reliability of scientific outcomes. In particular, the practice of preregistering study designs has shown important advantages. Preregistration can help limit questionable research practices, as well as increase the success rate of study replications. Many fields have now adopted preregistration as a default expectation for published studies. In 2022, we set up a panel \u201cMerits and Limits of User Study Preregistration\u201d with the overall goal of explaining the concept of preregistration to a wide VIS audience and discussing its suitability for visualization research. We report on the arguments and discussion of this panel in the hope that it can benefit the visualization community at large. 
All materials and a copy of this paper are available on our OSF repository at https://osf.io/wes57/.","authors":[{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"lonni.besancon@gmail.com","is_corresponding":true,"name":"Lonni Besan\u00e7on"},{"affiliations":["University of Virginia, Charlottesville, United States"],"email":"nosek@virginia.edu","is_corresponding":false,"name":"Brian Nosek"},{"affiliations":["Tilburg University, Tilburg, Netherlands"],"email":"t.l.haven@tilburguniversity.edu","is_corresponding":false,"name":"Tamarinde Haven"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"miriah.meyer@liu.se","is_corresponding":false,"name":"Miriah Meyer"},{"affiliations":["Northeastern University, Boston, United States"],"email":"c.dunne@northeastern.edu","is_corresponding":false,"name":"Cody Dunne"},{"affiliations":["Luxembourg Institute of Science and Technology, Belvaux, Luxembourg"],"email":"mohammad.ghoniem@gmail.com","is_corresponding":false,"name":"Mohammad Ghoniem"}],"award":"","doi":"","event_id":"w-beliv","event_title":"BELIV: evaluation and BEyond - methodoLogIcal approaches for Visualization","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-beliv-1021","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-beliv0","session_room":"None","session_title":"BELIV","session_uid":"w-beliv","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["BELIV"],"time_stamp":"","title":"Merits and Limits of Preregistration for Visualization Research","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-beliv-1026","abstract":"Despite 30+ years of academic practice, visualization still lacks an explanation of how and why it functions in complex organizations performing knowledge work. This survey examines the intersection of organizational studies and visualization design, highlighting the concept of boundary objects, which visualization practitioners are adopting in both CSCW (computer-supported collaborative work) and HCI. This paper also collects the prior literature on boundary objects in visualization design studies, a methodology which maps closely to action research in organizations, and addresses the same problems of \u2018knowing in common\u2019. Process artifacts generated by visualization design studies function as boundary objects in their own right, facilitating knowledge transfer across disciplines within an organization. Currently, visualization faces the challenge of explaining how sense-making functions across domains, through visualization artifacts, and how these support decision-making. 
As a deeply interdisciplinary field, visualization should adopt the theory of boundary objects in order to embrace its plurality of domains and systems, whilst empowering its practitioners with a unified process-based theory.","authors":[{"affiliations":["UC Santa Cruz, Santa Cruz, United States"],"email":"jtotto@ucsc.edu","is_corresponding":true,"name":"Jasmine Tan Otto"},{"affiliations":["California Institute of Technology, Pasadena, United States"],"email":"sd@scottdavidoff.com","is_corresponding":false,"name":"Scott Davidoff"}],"award":"","doi":"","event_id":"w-beliv","event_title":"BELIV: evaluation and BEyond - methodoLogIcal approaches for Visualization","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-beliv-1026","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-beliv0","session_room":"None","session_title":"BELIV","session_uid":"w-beliv","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["BELIV"],"time_stamp":"","title":"Visualization Artifacts are Boundary Objects","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-beliv-1027","abstract":"Foundation models for vision and language are the basis of AI applications across numerous sectors of society. The success of these models stems from their ability to mimic human capabilities, namely visual perception in vision models, and analytical reasoning in large language models. As visual perception and analysis are fundamental to data visualization, in this position paper we ask: how can we harness foundation models to advance progress in visualization design? Specifically, how can multimodal foundation models (MFMs) guide visualization design through visual perception? We approach these questions by investigating the effectiveness of MFMs for perceiving visualization, and formalizing the overall visualization design and optimization space. Specifically, we think that MFMs can best be viewed as judges, equipped with the ability to criticize visualizations, and provide us with actions on how to improve a visualization. We provide a deeper characterization for text-to-image generative models, and multi-modal large language models, organized by what these models provide as output, and how to utilize the output for guiding design decisions. 
We hope that our perspective can inspire researchers in visualization to explore how to approach MFMs for visualization design.","authors":[{"affiliations":["Vanderbilt University, Nashville, United States"],"email":"matthew.berger@vanderbilt.edu","is_corresponding":true,"name":"Matthew Berger"},{"affiliations":["Lawrence Livermore National Laboratory, Livermore, United States"],"email":"shusenl@sci.utah.edu","is_corresponding":false,"name":"Shusen Liu"}],"award":"","doi":"","event_id":"w-beliv","event_title":"BELIV: evaluation and BEyond - methodoLogIcal approaches for Visualization","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-beliv-1027","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-beliv0","session_room":"None","session_title":"BELIV","session_uid":"w-beliv","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["BELIV"],"time_stamp":"","title":"[position paper] The Visualization JUDGE: Can Multimodal Foundation Models Guide Visualization Design Through Visual Perception?","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-beliv-1033","abstract":"Submissions of original research that use Large Language Models (LLMs) or that study their behavior suddenly account for a sizable portion of works submitted and accepted to visualization (VIS) conferences and similar venues in human-computer interaction (HCI). In this brief position paper, I argue that reviewers are relatively unprepared to evaluate these submissions effectively. To support this conjecture, I reflect on my experience serving on four program committees for VIS and HCI conferences over the past year. I describe common reviewer critiques that I observed and highlight how these critiques influence the review process. I also raise some concerns about these critiques that could put applied LLM research out of reach of all but the best-resourced labs. 
While I conclude with suggestions for evaluating research contributions that incorporate LLMs, the ultimate goal of this position paper is to stimulate a discussion on the review process and its challenges.","authors":[{"affiliations":["Tableau Research, Seattle, United States"],"email":"amcrisan@uwaterloo.ca","is_corresponding":true,"name":"Anamaria Crisan"}],"award":"","doi":"","event_id":"w-beliv","event_title":"BELIV: evaluation and BEyond - methodoLogIcal approaches for Visualization","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-beliv-1033","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-beliv0","session_room":"None","session_title":"BELIV","session_uid":"w-beliv","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["BELIV"],"time_stamp":"","title":"We Don't Know How to Assess LLM Contributions in VIS/HCI","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-beliv-1034","abstract":"This paper revisits the role of quantitative and qualitative methods in visualization research in the context of advancements in artificial intelligence (AI). The focus is on how we can bridge the different methods in an integrated process of analyzing user study data. To this end, a process model of (potentially iterated) semantic enrichment of data is proposed. This joint perspective of data and semantics facilitates the integration of quantitative and qualitative methods. The model is motivated by examples of prior work, especially in the area of eye tracking user studies and coding data-rich observations. 
Finally, there is a discussion of open issues and research opportunities in the interplay between AI and qualitative and quantitative methods for visualization research.","authors":[{"affiliations":["University of Stuttgart, Stuttgart, Germany"],"email":"weiskopf@visus.uni-stuttgart.de","is_corresponding":true,"name":"Daniel Weiskopf"}],"award":"","doi":"","event_id":"w-beliv","event_title":"BELIV: evaluation and BEyond - methodoLogIcal approaches for Visualization","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-beliv-1034","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-beliv0","session_room":"None","session_title":"BELIV","session_uid":"w-beliv","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["BELIV"],"time_stamp":"","title":"Bridging Quantitative and Qualitative Methods for Visualization Research: A Data/Semantics Perspective in the Light of Advanced AI","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-beliv-1035","abstract":"Complexity is often seen as an inherent negative in information design, with the job of the designer being to reduce or eliminate complexity, and with principles like Tufte\u2019s \u201cdata-ink ratio\u201d or \u201cchartjunk\u201d to operationalize minimalism and simplicity in visualizations. However, in this position paper, we call for a more expansive view of complexity as a design material, like color or texture or shape: an element of information design that can be used in many ways, many of which are beneficial to the goals of using data to understand the world around us. We describe complexity as a phenomenon that occurs not just in visual design but in every aspect of the sensemaking process, from data collection to interpretation. For each of these stages, we present examples of ways that these various forms of complexity can be used (or abused) in visualization design. 
We ultimately call on the visualization community to build a more nuanced view of complexity, to look for places to usefully integrate complexity in multiple stages of the design process, and, even when the goal is to reduce complexity, to look for the non-visual forms of complexity that may have otherwise been overlooked.","authors":[{"affiliations":["University for Continuing Education Krems, Krems, Austria"],"email":"florian.windhager@donau-uni.ac.at","is_corresponding":true,"name":"Florian Windhager"},{"affiliations":["King's College London, London, United Kingdom"],"email":"alfie.abdulrahman@gmail.com","is_corresponding":false,"name":"Alfie Abdul-Rahman"},{"affiliations":["University of Applied Sciences Potsdam, Potsdam, Germany"],"email":"mark-jan.bludau@fh-potsdam.de","is_corresponding":false,"name":"Mark-Jan Bludau"},{"affiliations":["Warwick Institute for the Science of Cities, Coventry, United Kingdom"],"email":"nicole.hengesbach@posteo.de","is_corresponding":false,"name":"Nicole Hengesbach"},{"affiliations":["University of Amsterdam, Amsterdam, Netherlands"],"email":"h.lamqaddam@uva.nl","is_corresponding":false,"name":"Houda Lamqaddam"},{"affiliations":["OCAD University, Toronto, Canada"],"email":"meirelles.isabel@gmail.com","is_corresponding":false,"name":"Isabel Meirelles"},{"affiliations":["TU Eindhoven, Eindhoven, Netherlands"],"email":"b.speckmann@tue.nl","is_corresponding":false,"name":"Bettina Speckmann"},{"affiliations":["Northeastern University, Portland, United States"],"email":"m.correll@northeastern.edu","is_corresponding":false,"name":"Michael Correll"}],"award":"","doi":"","event_id":"w-beliv","event_title":"BELIV: evaluation and BEyond - methodoLogIcal approaches for Visualization","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-beliv-1035","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-beliv0","session_room":"None","session_title":"BELIV","session_uid":"w-beliv","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["BELIV"],"time_stamp":"","title":"Complexity as Design Material","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-beliv-1037","abstract":"Qualitative data analysis is widely adopted for user evaluation, not only in the Visualisation community but also in related communities, such as Human-Computer Interaction and Augmented and Virtual Reality. However, the data analysis process is often not clearly described, and the results are often simply listed as interesting quotes from study participants, or as summaries of such quotes. This position paper proposes an early concept for the use of a researcher as an \u201cAdvocatus Diaboli\u201d, or devil\u2019s advocate, to try to disprove the results of the data analysis by looking for quotes that contradict the findings, or for leading questions and task designs. Whatever this devil\u2019s advocate finds can then be used to iterate on the findings and the analysis process to form more suitable theories. Alternatively, researchers can clarify why they did not include this in their theory. 
This process could increase transparency in the qualitative data analysis process and increase trust in these findings, while being mindful of the necessary resources.","authors":[{"affiliations":["University of Applied Sciences Upper Austria, Hagenberg, Austria"],"email":"judith.friedl-knirsch@fh-hagenberg.at","is_corresponding":true,"name":"Judith Friedl-Knirsch"}],"award":"","doi":"","event_id":"w-beliv","event_title":"BELIV: evaluation and BEyond - methodoLogIcal approaches for Visualization","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-beliv-1037","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-beliv0","session_room":"None","session_title":"BELIV","session_uid":"w-beliv","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["BELIV"],"time_stamp":"","title":"Position paper: Proposing the use of an \u201cAdvocatus Diaboli\u201d as a pragmatic approach to improve transparency in qualitative data analysis and reporting","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-nlviz-1004","abstract":"Large Language Models (LLMs) have been widely applied in summarization due to their speedy and high-quality text generation. Summarization for sensemaking involves information compression and insight extraction. Human guidance in sensemaking tasks can prioritize and cluster relevant information for LLMs. However, users must translate their cognitive thinking into natural language to communicate with LLMs. Can we use more readable and operable visual representations to guide the summarization process for sensemaking? To explore this question, we propose introducing an intermediate step--a schematic visual workspace for human sensemaking--before the LLM generation to steer and refine the summarization process. We conduct a series of proof-of-concept experiments to investigate the potential for enhancing the summarization by GPT-4 through visual workspaces. Leveraging a textual sensemaking dataset with a ground truth summary, we evaluate the impact of a human-generated visual workspace on LLM-generated summarization of the dataset and assess the effectiveness of space-steered summarization. We categorize several types of extractable information from typical human workspaces that can be injected into engineered prompts to steer the LLM summarization. 
The results demonstrate how such workspaces can help align an LLM with the ground truth, leading to more accurate summarization results than without the workspaces.","authors":[{"affiliations":["Computer Science Department, Blacksburg, United States"],"email":"tangxxwhu@gmail.com","is_corresponding":true,"name":"Xuxin Tang"},{"affiliations":["Dod, Laurel, United States"],"email":"ericpkrokos@gmail.com","is_corresponding":false,"name":"Eric Krokos"},{"affiliations":["Department of Defense, College Park, United States"],"email":"visual.tycho@gmail.com","is_corresponding":false,"name":"Kirsten Whitley"},{"affiliations":["City University of Hong Kong, Hong Kong, China"],"email":"canliu@cityu.edu.hk","is_corresponding":false,"name":"Can Liu"},{"affiliations":["Virginia Tech, Blacksburg, United States"],"email":"naren@cs.vt.edu","is_corresponding":false,"name":"Naren Ramakrishnan"},{"affiliations":["Virginia Tech, Blacksburg, United States"],"email":"north@vt.edu","is_corresponding":false,"name":"Chris North"}],"award":"","doi":"","event_id":"w-nlviz","event_title":"NLVIZ Workshop: Exploring Research Opportunities for Natural Language, Text, and Data Visualization","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-nlviz-1004","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-nlviz0","session_room":"None","session_title":"MLVIZ","session_uid":"w-nlviz","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["MLVIZ"],"time_stamp":"","title":"Steering LLM Summarization with Visual Workspaces for Sensemaking","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-nlviz-1007","abstract":"We explore the use of segmentation and summarization methods for the generation of real-time conversation topic timelines, in the context of glanceable Augmented Reality (AR) visualization. Conversation timelines may serve to summarize and contextualize conversations as they are happening, helping to keep conversations on track. Because dialogue and conversations are broad and unpredictable by nature, and our processing is being done in real-time, not all relevant information may be present in the text at the time it is processed. Thus, we present considerations and challenges which may not be as prevalent in traditional implementations of topic classification and dialogue segmentation. Furthermore, we discuss how AR visualization requirements and design practices require an additional layer of decision making, which must be factored directly into the text processing algorithms. 
We explore three segmentation strategies -- using dialogue segmentation based on the text of the entire conversation, segmenting on 1-minute intervals, and segmenting on 10-second intervals -- and discuss our results.","authors":[{"affiliations":["University of Calgary, Calgary, Canada"],"email":"shanna.hollingwor1@ucalgary.ca","is_corresponding":true,"name":"Shanna Li Ching Hollingworth"},{"affiliations":["University of Calgary, Calgary, Canada"],"email":"wj@wjwillett.net","is_corresponding":false,"name":"Wesley Willett"}],"award":"","doi":"","event_id":"w-nlviz","event_title":"NLVIZ Workshop: Exploring Research Opportunities for Natural Language, Text, and Data Visualization","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-nlviz-1007","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-nlviz0","session_room":"None","session_title":"MLVIZ","session_uid":"w-nlviz","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["MLVIZ"],"time_stamp":"","title":"Towards Real-Time Speech Segmentation for Glanceable Conversation Visualization","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-nlviz-1008","abstract":"Academic literature reviews have traditionally relied on techniques such as keyword searches and accumulation of relevant back-references, using databases like Google Scholar or IEEE Xplore. However, both the precision and accuracy of these search techniques are limited by the presence or absence of specific keywords, making literature review akin to searching for needles in a haystack. We present vitaLITy 2, a solution that uses a Large Language Model (LLM)-based approach to identify semantically relevant literature in a textual embedding space. We include a corpus of 66,692 papers from 1970-2023, which are searchable through text embeddings created by three language models. vitaLITy 2 contributes a novel Retrieval Augmented Generation (RAG) architecture and can be interacted with through an LLM with augmented prompts, including summarization of a collection of papers. vitaLITy 2 also provides a chat interface that allows users to perform complex queries without learning any new programming language. This also enables users to take advantage of the knowledge captured in the LLM from its enormous training corpus. 
Finally, we demonstrate the applicability of vitaLITy 2 through two usage scenarios.","authors":[{"affiliations":["University of Nottingham, Nottingham, United Kingdom"],"email":"psxah15@nottingham.ac.uk","is_corresponding":false,"name":"Hongye An"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"arpitnarechania@gatech.edu","is_corresponding":true,"name":"Arpit Narechania"},{"affiliations":["University of Nottingham, Nottingham, United Kingdom"],"email":"kai.xu@nottingham.ac.uk","is_corresponding":false,"name":"Kai Xu"}],"award":"","doi":"","event_id":"w-nlviz","event_title":"NLVIZ Workshop: Exploring Research Opportunities for Natural Language, Text, and Data Visualization","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-nlviz-1008","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-nlviz0","session_room":"None","session_title":"MLVIZ","session_uid":"w-nlviz","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["MLVIZ"],"time_stamp":"","title":"vitaLITy 2: Reviewing Academic Literature Using Large Language Models","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-nlviz-1009","abstract":"Analyzing and finding anomalies in multi-dimensional datasets is a cumbersome but vital task across different domains. In the context of financial fraud detection, analysts must quickly identify suspicious activity among transactional data. This is an iterative process made of complex exploratory tasks such as recognizing patterns, grouping, and comparing. To mitigate the information overload inherent to these steps, we present a tool combining automated information highlights, Large Language Model generated textual insights, and visual analytics, facilitating exploration at different levels of detail. We perform a segmentation of the data per analysis area and visually represent each one, making use of automated visual cues to signal which require more attention. Upon user selection of an area, our system provides textual and graphical summaries. The text, acting as a link between the high-level and detailed views of the chosen segment, allows for a quick understanding of relevant details. A thorough exploration of the data comprising the selection can be done through graphical representations. 
The feedback gathered in a study performed with seven domain experts suggests our tool effectively supports and guides exploratory analysis, easing the identification of suspicious information.","authors":[{"affiliations":["Feedzai, Lisbon, Portugal"],"email":"beatriz.feliciano@feedzai.com","is_corresponding":true,"name":"Beatriz Feliciano"},{"affiliations":["Feedzai, Lisbon, Portugal"],"email":"rita.costa@feedzai.com","is_corresponding":false,"name":"Rita Costa"},{"affiliations":["Feedzai, Porto, Portugal"],"email":"jean.alves@feedzai.com","is_corresponding":false,"name":"Jean Alves"},{"affiliations":["Feedzai, Madrid, Spain"],"email":"javier.liebana@feedzai.com","is_corresponding":false,"name":"Javier Li\u00e9bana"},{"affiliations":["Feedzai, Lisbon, Portugal"],"email":"diogo.duarte@feedzai.com","is_corresponding":false,"name":"Diogo Ramalho Duarte"},{"affiliations":["Feedzai, Lisbon, Portugal"],"email":"pedro.bizarro@feedzai.com","is_corresponding":false,"name":"Pedro Bizarro"}],"award":"","doi":"","event_id":"w-nlviz","event_title":"NLVIZ Workshop: Exploring Research Opportunities for Natural Language, Text, and Data Visualization","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-nlviz-1009","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-nlviz0","session_room":"None","session_title":"MLVIZ","session_uid":"w-nlviz","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["MLVIZ"],"time_stamp":"","title":"\u201cShow Me What\u2019s Wrong!\u201d: Combining Charts and Text to Guide Data Analysis","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-nlviz-1010","abstract":"Dimension reduction (DR) can transform high-dimensional text embeddings into a 2D visual projection facilitating the exploration of document similarities. However, the projection often lacks connection to the text semantics, due to the opaque nature of text embeddings and non-linear dimension reductions. To address these problems, we propose a gradient-based method for visualizing the spatial semantics of dimensionally reduced text embeddings. This method employs gradients to assess the sensitivity of the projected documents with respect to the underlying words. The method can be applied to existing DR algorithms and text embedding models. Using these gradients, we designed a visualization system that incorporates spatial word clouds into the document projection space to illustrate the impactful text features. 
We further present three usage scenarios that demonstrate the practical applications of our system to facilitate the discovery and interpretation of underlying semantics in text projections.","authors":[{"affiliations":["Computer Science, Virginia Tech, Blacksburg, United States"],"email":"wliu3@vt.edu","is_corresponding":false,"name":"Wei Liu"},{"affiliations":["Virginia Tech, Blacksburg, United States"],"email":"north@vt.edu","is_corresponding":false,"name":"Chris North"},{"affiliations":["Tulane University, New Orleans, United States"],"email":"rfaust1@tulane.edu","is_corresponding":true,"name":"Rebecca Faust"}],"award":"","doi":"","event_id":"w-nlviz","event_title":"NLVIZ Workshop: Exploring Research Opportunities for Natural Language, Text, and Data Visualization","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-nlviz-1010","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-nlviz0","session_room":"None","session_title":"MLVIZ","session_uid":"w-nlviz","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["MLVIZ"],"time_stamp":"","title":"Visualizing Spatial Semantics of Dimensionally Reduced Text Embeddings","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-nlviz-1011","abstract":"Recently, large language models (LLMs) have shown great promise in translating natural language (NL) queries into visualizations, but their \u201cblack-box\u201d nature often limits explainability and debuggability. In response, we present a comprehensive text prompt that, given a tabular dataset and an NL query about the dataset, generates an analytic specification including (detected) data attributes, (inferred) analytic tasks, and (recommended) visualizations. This specification captures key aspects of the query translation process, affording both explainability and debuggability. For instance, it provides mappings from the detected entities to the corresponding phrases in the input query, as well as the specific visual design principles that determined the visualization recommendations. Moreover, unlike prior LLM-based approaches, our prompt supports conversational interaction and ambiguity detection capabilities. 
In this paper, we detail the iterative process of curating our prompt, present a preliminary performance evaluation using GPT-4, and discuss the strengths and limitations of LLMs at various stages of query translation.","authors":[{"affiliations":["UNC Charlotte, Charlotte, United States"],"email":"ssah1@uncc.edu","is_corresponding":true,"name":"Subham Sah"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"rmitra34@gatech.edu","is_corresponding":false,"name":"Rishab Mitra"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"arpitnarechania@gatech.edu","is_corresponding":false,"name":"Arpit Narechania"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"endert@gatech.edu","is_corresponding":false,"name":"Alex Endert"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"john.stasko@cc.gatech.edu","is_corresponding":false,"name":"John Stasko"},{"affiliations":["UNC Charlotte, Charlotte, United States"],"email":"wdou1@uncc.edu","is_corresponding":false,"name":"Wenwen Dou"}],"award":"","doi":"","event_id":"w-nlviz","event_title":"NLVIZ Workshop: Exploring Research Opportunities for Natural Language, Text, and Data Visualization","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-nlviz-1011","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-nlviz0","session_room":"None","session_title":"MLVIZ","session_uid":"w-nlviz","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["MLVIZ"],"time_stamp":"","title":"Generating Analytic Specifications for Data Visualization from Natural Language Queries using Large Language Models","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-nlviz-1016","abstract":"We explore how natural language authoring with large language models (LLMs) can support the inline authoring of word-scale visualizations (WSVs). While word-scale visualizations that live alongside and within document text can support rich integration of data into written narratives and communication, these small visualizations have typically been challenging to author. We explore how modern LLMs---which are able to generate diverse visualization designs based on simple natural language descriptions---might allow authors to specify and insert new visualizations inline as they write text. 
Drawing on our experiences with an initial prototype built using GPT-4, we highlight the expressive potential of inline natural language visualization authoring and identify opportunities for further research.","authors":[{"affiliations":["University of Calgary, Calgary, Canada"],"email":"paige.sobrien@ucalgary.ca","is_corresponding":true,"name":"Paige So'Brien"},{"affiliations":["University of Calgary, Calgary, Canada"],"email":"wj@wjwillett.net","is_corresponding":false,"name":"Wesley Willett"}],"award":"","doi":"","event_id":"w-nlviz","event_title":"NLVIZ Workshop: Exploring Research Opportunities for Natural Language, Text, and Data Visualization","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-nlviz-1016","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-nlviz0","session_room":"None","session_title":"MLVIZ","session_uid":"w-nlviz","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["MLVIZ"],"time_stamp":"","title":"Towards Inline Natural Language Authoring for Word-Scale Visualizations","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-nlviz-1019","abstract":"As language models have become increasingly successful at a wide array of tasks, different prompt engineering methods have been developed alongside them in order to adapt these models to new tasks. One of them is Tree-of-Thoughts (ToT), a prompting strategy and framework for language model inference and problem-solving. It allows the model to explore multiple solution paths and select the best course of action, producing a tree-like structure of intermediate steps (i.e., thoughts). This method was shown to be effective for several problem types. However, the official implementation has a high barrier to usage as it requires setup overhead and incorporates task-specific problem templates which are difficult to generalize to new problem types. It also does not allow user interaction to improve or suggest new thoughts. We introduce iToT (interactive Tree-of-Thoughts), a generalized and interactive Tree-of-Thoughts prompting system. iToT allows users to explore each step of the model\u2019s problem-solving process as well as to correct and extend the model\u2019s thoughts. iToT revolves around a visual interface that facilitates simple and generic ToT usage and makes the problem-solving process transparent to users. This facilitates a better understanding of which thoughts and considerations lead to the model\u2019s final decision. 
Through two case studies, we demonstrate the usefulness of iToT in different human-LLM co-writing tasks.","authors":[{"affiliations":["ETHZ, Zurich, Switzerland"],"email":"aboyle@student.ethz.ch","is_corresponding":false,"name":"Alan David Boyle"},{"affiliations":["ETH Z\u00fcrich, Z\u00fcrich, Switzerland"],"email":"igupta@ethz.ch","is_corresponding":true,"name":"Isha Gupta"},{"affiliations":["ETH Z\u00fcrich, Z\u00fcrich, Switzerland"],"email":"shoenig@student.ethz.ch","is_corresponding":false,"name":"Sebastian H\u00f6nig"},{"affiliations":["ETH Z\u00fcrich, Z\u00fcrich, Switzerland"],"email":"lukas.mautner98@gmail.com","is_corresponding":false,"name":"Lukas Mautner"},{"affiliations":["ETH Z\u00fcrich, Z\u00fcrich, Switzerland"],"email":"kenza.amara@ai.ethz.ch","is_corresponding":false,"name":"Kenza Amara"},{"affiliations":["ETH Z\u00fcrich, Z\u00fcrich, Switzerland"],"email":"furui.cheng@inf.ethz.ch","is_corresponding":false,"name":"Furui Cheng"},{"affiliations":["ETH Z\u00fcrich, Z\u00fcrich, Switzerland"],"email":"melassady@ai.ethz.ch","is_corresponding":false,"name":"Mennatallah El-Assady"}],"award":"","doi":"","event_id":"w-nlviz","event_title":"NLVIZ Workshop: Exploring Research Opportunities for Natural Language, Text, and Data Visualization","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-nlviz-1019","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-nlviz0","session_room":"None","session_title":"MLVIZ","session_uid":"w-nlviz","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["MLVIZ"],"time_stamp":"","title":"iToT: An Interactive System for Customized Tree-of-Thought Generation","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-nlviz-1020","abstract":"Strategy management analyses are created by business consultants with common analysis frameworks (i.e. comparative analyses) and associated diagrams. We show these can be largely constructed using LLMs, starting with the extraction of insights from data, organization of those insights according to a strategy management framework, and then depiction in the typical strategy management diagram for that framework (static textual visualizations). 
We discuss caveats and future directions to generalize for broader uses.","authors":[{"affiliations":["Uncharted Software, Toronto, Canada"],"email":"richard.brath@alumni.utoronto.ca","is_corresponding":true,"name":"Richard Brath"},{"affiliations":["Uncharted Software, Toronto, Canada"],"email":"miltonjbradley@gmail.com","is_corresponding":false,"name":"Adam James Bradley"},{"affiliations":["Uncharted Software, Toronto, Canada"],"email":"david@jonker.work","is_corresponding":false,"name":"David Jonker"}],"award":"","doi":"","event_id":"w-nlviz","event_title":"NLVIZ Workshop: Exploring Research Opportunities for Natural Language, Text, and Data Visualization","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-nlviz-1020","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-nlviz0","session_room":"None","session_title":"MLVIZ","session_uid":"w-nlviz","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["MLVIZ"],"time_stamp":"","title":"Strategic management analysis: from data to strategy diagram by LLM","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-nlviz-1021","abstract":"We present a mixed-methods study to explore how large language models (LLMs) can assist users in the visual exploration and analysis of complex data structures, using knowledge graphs (KGs) as a baseline. We surveyed and interviewed 20 professionals who regularly work with LLMs with the goal of using them for (or alongside) KGs. 
From the analysis of our interviews, we contribute a preliminary roadmap for the design of LLM-driven visual analysis systems and outline future opportunities in this emergent design space.","authors":[{"affiliations":["MIT Lincoln Laboratory, Lexington, United States"],"email":"harry.li@ll.mit.edu","is_corresponding":true,"name":"Harry Li"},{"affiliations":["Tufts University, Medford, United States"],"email":"gabriel.appleby@tufts.edu","is_corresponding":false,"name":"Gabriel Appleby"},{"affiliations":["MIT Lincoln Laboratory, Lexington, United States"],"email":"ashley.suh@ll.mit.edu","is_corresponding":false,"name":"Ashley Suh"}],"award":"","doi":"","event_id":"w-nlviz","event_title":"NLVIZ Workshop: Exploring Research Opportunities for Natural Language, Text, and Data Visualization","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-nlviz-1021","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-nlviz0","session_room":"None","session_title":"MLVIZ","session_uid":"w-nlviz","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["MLVIZ"],"time_stamp":"","title":"A Preliminary Roadmap for LLMs as Visual Data Analysis Assistants","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-nlviz-1022","abstract":"This study explores the potential of visual representation in understanding the structural elements of Arabic poetry, a subject of significant educational and research interest. Our objective is to make Arabic poetic works more accessible to readers of both Arabic and non-Arabic linguistic backgrounds by employing visualization, exploration, and analytical techniques. We transformed poetry texts into syllables, identified their metrical structures, segmented verses into patterns, and then converted these patterns into visual representations. Following this, we computed and visualized the dissimilarities between these images, and overlaid their differences. Our findings suggest that the positional patterns across a poem play a pivotal role in effective poetry clustering, as demonstrated by our newly computed metrics. The results of our clustering experiments showed a marked improvement over previous attempts, thereby providing new insights into the composition and structure of Arabic poetry. 
This study underscored the value of visual representation in enhancing our understanding of Arabic poetry.","authors":[{"affiliations":["University of Neuch\u00e2tel, Neuch\u00e2tel, Switzerland"],"email":"abdelmalek.berkani@unine.ch","is_corresponding":true,"name":"Abdelmalek Berkani"},{"affiliations":["University of Neuch\u00e2tel, Neuch\u00e2tel, Switzerland"],"email":"adrian.holzer@unine.ch","is_corresponding":false,"name":"Adrian Holzer"}],"award":"","doi":"","event_id":"w-nlviz","event_title":"NLVIZ Workshop: Exploring Research Opportunities for Natural Language, Text, and Data Visualization","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-nlviz-1022","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-nlviz0","session_room":"None","session_title":"MLVIZ","session_uid":"w-nlviz","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["MLVIZ"],"time_stamp":"","title":"Enhancing Arabic Poetic Structure Analysis through Visualization","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-uncertainty-1007","abstract":"Symmetric second-order tensors are fundamental in various scientific and engineering domains, as they can represent properties such as material stresses or diffusion processes in brain tissue. In recent years, several approaches have been introduced and improved to analyze these fields using topological features, such as degenerate tensor locations, i.e., the tensor has repeated eigenvalues, or normal surfaces. Traditionally, the identification of such features has been limited to single tensor fields. However, it has become common to create ensembles to account for uncertainties and variability in simulations and measurements. In this work, we explore novel methods for describing and visualizing degenerate tensor locations in 3D symmetric second-order tensor field ensembles. We base our considerations on the tensor mode and analyze its practicality in characterizing the uncertainty of degenerate tensor locations before proposing a variety of visualization strategies to effectively communicate degenerate tensor information. 
We demonstrate our techniques for synthetic and simulation data sets. The results indicate that the interplay of different descriptions for uncertainty can effectively convey information on degenerate tensor locations.","authors":[{"affiliations":["University of Cologne, Cologne, Germany"],"email":"tadea.schmitz@uni-koeln.de","is_corresponding":false,"name":"Tadea Schmitz"},{"affiliations":["RWTH Aachen University, Aachen, Germany"],"email":"gerrits@vis.rwth-aachen.de","is_corresponding":true,"name":"Tim Gerrits"}],"award":"","doi":"","event_id":"w-uncertainty","event_title":"Uncertainty Visualization: Applications, Techniques, Software, and Decision Frameworks","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-uncertainty-1007","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-uncertainty0","session_room":"None","session_title":"Uncertainty Visualization","session_uid":"w-uncertainty","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Uncertainty Visualization"],"time_stamp":"","title":"Exploring Uncertainty Visualization for Degenerate Tensors in 3D Symmetric Second-Order Tensor Field Ensembles","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-uncertainty-1009","abstract":"Understanding and communicating data uncertainty is crucial for informed decision-making across various domains, including finance, healthcare, and public policy. This study investigates the impact of gender and acoustic variables on decision-making, confidence, and trust through a crowdsourced experiment. We compared visualization-only representations of uncertainty to text-forward and speech-forward bimodal representations, including multiple synthetic voices across gender. Speech-forward representations led to an increase in risky decisions, and text-forward representations led to lower confidence. Contrary to prior work, speech-forward forecasts did not receive higher ratings of trust. Higher normalized pitch led to a slight increase in decision confidence, but other voice characteristics had minimal impact on decisions and trust. An exploratory analysis of accented speech showed consistent results with the main experiment and additionally indicated lower trust ratings for information presented in Indian and Kenyan accents. 
The results underscore the importance of considering acoustic and contextual factors in the presentation of data uncertainty.","authors":[{"affiliations":["University of California Berkeley, Berkeley, United States"],"email":"chase_stokes@berkeley.edu","is_corresponding":true,"name":"Chase Stokes"},{"affiliations":["Stanford University, Stanford, United States"],"email":"sanker@stanford.edu","is_corresponding":false,"name":"Chelsea Sanker"},{"affiliations":["Versalytix, Columbus, United States"],"email":"bcogley@versalytix.com","is_corresponding":false,"name":"Bridget Cogley"},{"affiliations":["Tableau Research, Palo Alto, United States"],"email":"vsetlur@tableau.com","is_corresponding":false,"name":"Vidya Setlur"}],"award":"","doi":"","event_id":"w-uncertainty","event_title":"Uncertainty Visualization: Applications, Techniques, Software, and Decision Frameworks","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-uncertainty-1009","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-uncertainty0","session_room":"None","session_title":"Uncertainty Visualization","session_uid":"w-uncertainty","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Uncertainty Visualization"],"time_stamp":"","title":"Voicing Uncertainty: How Speech, Text, and Visualizations Influence Decisions with Data Uncertainty","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-uncertainty-1010","abstract":"The increasing adoption of Deep Neural Networks (DNNs) has led to their application in many challenging scientific visualization tasks. While advanced DNNs offer impressive generalization capabilities, understanding factors such as model prediction quality, robustness, and uncertainty is crucial. These insights can enable domain scientists to make informed decisions about their data. However, DNNs inherently lack the ability to estimate prediction uncertainty, necessitating new research to construct robust uncertainty-aware visualization techniques tailored for various visualization tasks. In this work, we propose uncertainty-aware implicit neural representations to model scalar field data sets effectively and comprehensively study the efficacy and benefits of estimated uncertainty information for volume visualization tasks. We evaluate the effectiveness of two principled deep uncertainty estimation techniques: (1) Deep Ensemble and (2) Monte Carlo Dropout (MCDropout). These techniques enable uncertainty-informed volume visualization in scalar field data sets. Our extensive exploration across multiple data sets demonstrates that uncertainty-aware models produce informative volume visualization results. 
Moreover, integrating prediction uncertainty enhances the trustworthiness of our DNN model, making it suitable for robustly analyzing and visualizing real-world scientific volumetric data sets.","authors":[{"affiliations":["IIT Kanpur, Kanpur, India"],"email":"saklanishanu@gmail.com","is_corresponding":false,"name":"Shanu Saklani"},{"affiliations":["Indian Institute of Technology Kanpur, Kanpur, India"],"email":"chitwangoel1010@gmail.com","is_corresponding":false,"name":"Chitwan Goel"},{"affiliations":["Indian Institute of Technology Kanpur, Kanpur, India"],"email":"shrey.bansal75@gmail.com","is_corresponding":false,"name":"Shrey Bansal"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"jay.wang@rutgers.edu","is_corresponding":false,"name":"Zhe Wang"},{"affiliations":["Indian Institute of Technology Kanpur (IIT Kanpur), Kanpur, India"],"email":"soumya.cvpr@gmail.com","is_corresponding":true,"name":"Soumya Dutta"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"tushar.athawale@gmail.com","is_corresponding":false,"name":"Tushar M. Athawale"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"pugmire@ornl.gov","is_corresponding":false,"name":"David Pugmire"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"crj@sci.utah.edu","is_corresponding":false,"name":"Chris R. Johnson"}],"award":"","doi":"","event_id":"w-uncertainty","event_title":"Uncertainty Visualization: Applications, Techniques, Software, and Decision Frameworks","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-uncertainty-1010","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-uncertainty0","session_room":"None","session_title":"Uncertainty Visualization","session_uid":"w-uncertainty","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Uncertainty Visualization"],"time_stamp":"","title":"Uncertainty-Informed Volume Visualization using Implicit Neural Representation","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-uncertainty-1011","abstract":"Current research provides methods to communicate uncertainty and adapts classical algorithms of the visualization pipeline to take the uncertainty into account. Various existing visualization frameworks include methods to present uncertain data but do not offer transformation techniques tailored to uncertain data. Therefore, we propose a software package for uncertainty-aware data analysis in Python (UADAPy) offering methods for uncertain data along the visualization pipeline. We aim to provide a platform that is the foundation for further integration of uncertainty algorithms and visualizations. It provides common utility functionality to support research in uncertainty-aware visualization algorithms and makes state-of-the-art research results accessible to the end user. 
The project is available at https://github.com/UniStuttgart-VISUS/uadapy.","authors":[{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"patrick.paetzold@uni-konstanz.de","is_corresponding":true,"name":"Patrick Paetzold"},{"affiliations":["University of Stuttgart, Stuttgart, Germany"],"email":"david.haegele@visus.uni-stuttgart.de","is_corresponding":false,"name":"David H\u00e4gele"},{"affiliations":["University of Stuttgart, Stuttgart, Germany"],"email":"m_ever14@uni-muenster.de","is_corresponding":false,"name":"Marina Evers"},{"affiliations":["University of Stuttgart, Stuttgart, Germany"],"email":"weiskopf@visus.uni-stuttgart.de","is_corresponding":false,"name":"Daniel Weiskopf"},{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"oliver.deussen@uni-konstanz.de","is_corresponding":false,"name":"Oliver Deussen"}],"award":"","doi":"","event_id":"w-uncertainty","event_title":"Uncertainty Visualization: Applications, Techniques, Software, and Decision Frameworks","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-uncertainty-1011","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-uncertainty0","session_room":"None","session_title":"Uncertainty Visualization","session_uid":"w-uncertainty","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Uncertainty Visualization"],"time_stamp":"","title":"UADAPy: An Uncertainty-Aware Visualization and Analysis Toolbox","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-uncertainty-1012","abstract":"Uncertainty visualization is an emerging research topic in data visualization because neglecting uncertainty in visualization can lead to inaccurate assessments. In this short paper, we study the propagation of multivariate data uncertainty in visualization. Although there have been a few advancements in probabilistic uncertainty visualization of multivariate data, three critical challenges remain to be addressed. First, the state-of-the-art probabilistic uncertainty visualization framework is limited to bivariate data (two variables). Second, the existing uncertainty visualization algorithms use computationally intensive techniques and lack support for cross-platform portability. Third, as a consequence of the computational expense, integration into interactive production visualization tools is impractical. In this work, we address all three issues and make a threefold contribution. First, we generalize the state-of-the-art probabilistic framework for bivariate data to multivariate data with an arbitrary number of variables. Second, through utilization of VTK-m\u2019s shared-memory parallelism and cross-platform compatibility features, we demonstrate acceleration of multivariate uncertainty visualization on different many-core architectures, including OpenMP and AMD GPUs. Third, we demonstrate the integration of our algorithms with the ParaView software. 
We demonstrate the utility of our algorithms through experiments on multivariate simulation data.","authors":[{"affiliations":["Indiana University Bloomington, Bloomington, United States"],"email":"gautamhari@outlook.com","is_corresponding":true,"name":"Gautam Hari"},{"affiliations":["Indiana University Bloomington, Bloomington, United States"],"email":"nrushad2001@gmail.com","is_corresponding":false,"name":"Nrushad A Joshi"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"jay.wang@rutgers.edu","is_corresponding":false,"name":"Zhe Wang"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"gongq@ornl.gov","is_corresponding":false,"name":"Qian Gong"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"pugmire@ornl.gov","is_corresponding":false,"name":"David Pugmire"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"kmorel@acm.org","is_corresponding":false,"name":"Kenneth Moreland"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"crj@sci.utah.edu","is_corresponding":false,"name":"Chris R. Johnson"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"klasky@ornl.gov","is_corresponding":false,"name":"Scott Klasky"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"pnorbert@ornl.gov","is_corresponding":false,"name":"Norbert Podhorszki"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"tushar.athawale@gmail.com","is_corresponding":false,"name":"Tushar M. Athawale"}],"award":"","doi":"","event_id":"w-uncertainty","event_title":"Uncertainty Visualization: Applications, Techniques, Software, and Decision Frameworks","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-uncertainty-1012","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-uncertainty0","session_room":"None","session_title":"Uncertainty Visualization","session_uid":"w-uncertainty","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Uncertainty Visualization"],"time_stamp":"","title":"FunM^2C: A Filter for Uncertainty Visualization of Multivariate Data on Multi-Core Devices","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-uncertainty-1013","abstract":"Uncertainty is inherent to most data, including vector field data, yet it is often omitted in visualizations and representations. Effective uncertainty visualization can enhance the understanding and interpretability of vector field data. For instance, in the context of severe weather events such as hurricanes and wildfires, effective uncertainty visualization can provide crucial insights about fire spread or hurricane behavior and aid in resource management and risk mitigation. Glyphs are commonly used for representing vector uncertainty but are often limited to 2D. In this work, we present a glyph-based technique for accurately representing 3D vector uncertainty and a comprehensive framework for visualization, exploration, and analysis using our new glyphs. 
We employ hurricane and wildfire examples to demonstrate the efficacy of our glyph design and visualization tool in conveying vector field uncertainty.","authors":[{"affiliations":["Scientific Computing and Imaging Institute, Salt Lake City, United States"],"email":"touermi@sci.utah.edu","is_corresponding":true,"name":"Timbwaoga A. J. Ouermi"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"jixianli@sci.utah.edu","is_corresponding":false,"name":"Jixian Li"},{"affiliations":["Sandia National Laboratories, Albuquerque, United States"],"email":"zbmorro@sandia.gov","is_corresponding":false,"name":"Zachary Morrow"},{"affiliations":["Sandia National Laboratories, Albuquerque, United States"],"email":"bartv@sandia.gov","is_corresponding":false,"name":"Bart van Bloemen Waanders"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"crj@sci.utah.edu","is_corresponding":false,"name":"Chris R. Johnson"}],"award":"","doi":"","event_id":"w-uncertainty","event_title":"Uncertainty Visualization: Applications, Techniques, Software, and Decision Frameworks","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-uncertainty-1013","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-uncertainty0","session_room":"None","session_title":"Uncertainty Visualization","session_uid":"w-uncertainty","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Uncertainty Visualization"],"time_stamp":"","title":"Glyph-Based Uncertainty Visualization and Analysis of Time-Varying Vector Field","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-uncertainty-1014","abstract":"Isosurface visualization is fundamental for exploring and analyzing 3D volumetric data. Marching cubes (MC) algorithms with linear interpolation are commonly used for isosurface extraction and visualization. Although linear interpolation is easy to implement, it has limitations when the underlying data is complex and high-order, which is the case for most real-world data. Linear interpolation can output vertices at the wrong location. Its inability to deal with sharp features and features smaller than grid cells can create holes and broken pieces in the extracted isosurface. Despite these limitations, isosurface visualizations typically do not include insight into the spatial location and the magnitude of these errors. We utilize high-order interpolation methods with MC algorithms and interactive visualization to highlight these uncertainties. Our visualization tool helps identify the regions of high interpolation errors. It also allows users to query local areas for details and compare the differences between isosurfaces from different interpolation methods. In addition, we employ high-order methods to identify and reconstruct possible features that linear methods cannot detect. We showcase how our visualization tool helps explore and understand the extracted isosurface errors through synthetic and real-world data.","authors":[{"affiliations":["Scientific Computing and Imaging Institute, Salt Lake City, United States"],"email":"touermi@sci.utah.edu","is_corresponding":true,"name":"Timbwaoga A. J. 
Ouermi"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"jixianli@sci.utah.edu","is_corresponding":false,"name":"Jixian Li"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"tushar.athawale@gmail.com","is_corresponding":false,"name":"Tushar M. Athawale"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"crj@sci.utah.edu","is_corresponding":false,"name":"Chris R. Johnson"}],"award":"","doi":"","event_id":"w-uncertainty","event_title":"Uncertainty Visualization: Applications, Techniques, Software, and Decision Frameworks","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-uncertainty-1014","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-uncertainty0","session_room":"None","session_title":"Uncertainty Visualization","session_uid":"w-uncertainty","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Uncertainty Visualization"],"time_stamp":"","title":"Estimation and Visualization of Isosurface Uncertainty from Linear and High-Order Interpolation Methods","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-uncertainty-1015","abstract":"Functional depth is a well-known technique used to derive descriptive statistics (e.g., median, quartiles, and outliers) for 1D data. Surface boxplots extend this concept to ensembles of images, helping scientists and users identify representative and outlier images. However, the computational time for surface boxplots increases cubically with the number of ensemble members, making it impractical for integration into visualization tools. In this paper, we propose a deep-learning solution for efficient depth prediction and computation of surface boxplots for time-varying ensemble data. Our deep learning framework accurately predicts member depths in a surface boxplot, achieving average speedups of 6X on a CPU and 15X on a GPU for the 2D Red Sea dataset with 50 ensemble members compared to the traditional depth computation algorithm. Our approach achieves at least a 99\\% level of rank preservation, with order flipping occurring only at pairs with extremely similar depth values that pose no statistical differences. This local flipping does not significantly impact the overall depth order of the ensemble members.","authors":[{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"mengjiao@sci.utah.edu","is_corresponding":true,"name":"Mengjiao Han"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"tushar.athawale@gmail.com","is_corresponding":false,"name":"Tushar M. Athawale"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"jixianli@sci.utah.edu","is_corresponding":false,"name":"Jixian Li"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"crj@sci.utah.edu","is_corresponding":false,"name":"Chris R. 
Johnson"}],"award":"","doi":"","event_id":"w-uncertainty","event_title":"Uncertainty Visualization: Applications, Techniques, Software, and Decision Frameworks","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-uncertainty-1015","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-uncertainty0","session_room":"None","session_title":"Uncertainty Visualization","session_uid":"w-uncertainty","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Uncertainty Visualization"],"time_stamp":"","title":"Accelerated Depth Computation for Surface Boxplots with Deep Learning","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-uncertainty-1016","abstract":"Wildfire poses substantial risks to our health, environment, and economy. Studying wildfire is challenging due to its complex inter- action with the atmosphere dynamics and the terrain. Researchers have employed ensemble simulations to study the relationship be- tween variables and mitigate uncertainties in unpredictable initial conditions. However, many domain scientists are unaware of the advanced visualization tools available for conveying uncertainty. To bring some uncertainty visualization techniques, we build an interactive visualization system that utilizes a band-depth-based method that provides a statistical summary and visualization for fire front contours from the ensemble. We augment the visualiza- tion system with capabilities to study wildfires as a dynamic system. In this paper, We demonstrate how our system can support domain scientists in studying fire spread patterns, identifying outlier simu- lations, and navigating to interesting instances based on a summary of events.","authors":[{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"jixianli@sci.utah.edu","is_corresponding":true,"name":"Jixian Li"},{"affiliations":["Scientific Computing and Imaging Institute, Salk Lake City, United States"],"email":"touermi@sci.utah.edu","is_corresponding":false,"name":"Timbwaoga A. J. Ouermi"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"crj@sci.utah.edu","is_corresponding":false,"name":"Chris R. 
Johnson"}],"award":"","doi":"","event_id":"w-uncertainty","event_title":"Uncertainty Visualization: Applications, Techniques, Software, and Decision Frameworks","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-uncertainty-1016","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-uncertainty0","session_room":"None","session_title":"Uncertainty Visualization","session_uid":"w-uncertainty","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Uncertainty Visualization"],"time_stamp":"","title":"Visualizing Uncertainties in Ensemble Wildfire Forecast Simulations","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-uncertainty-1017","abstract":"Uncertainty visualization is a key component in translating important insights from ensemble data into actionable decision-making by visually conveying various aspects of uncertainty within a system. With the recent advent of fast surrogate models for computationally expensive simulations, users can interact with more aspects of data spaces than ever before. However, the integration of ensemble data with surrogate models in a decision-making tool brings up new challenges for uncertainty visualization, namely how to reconcile and communicate the new and different types of uncertainties brought in by surrogates and how to utilize these new data estimates in actionable ways. In this work, we examine these issues as they relate to high-dimensional data visualization, the integration of discrete datasets and the continuous representations of those datasets, and the unique difficulties associated with systems that allow users to iterate between input and output spaces. We assess the role of uncertainty visualization in facilitating intuitive and actionable interaction with ensemble data and surrogate models, and highlight key challenges in this new frontier of computational simulation.","authors":[{"affiliations":["National Renewable Energy Lab, Golden, United States"],"email":"sam.molnar@nrel.gov","is_corresponding":true,"name":"Sam Molnar"},{"affiliations":["National Renewable Energy Laboratory, Golden, United States"],"email":"jd.laurencechasen@nrel.gov","is_corresponding":false,"name":"J.D. 
Laurence-Chasen"},{"affiliations":["The Ohio State University, Columbus, United States","National Renewable Energy Lab, Golden, United States"],"email":"duan.418@osu.edu","is_corresponding":false,"name":"Yuhan Duan"},{"affiliations":["National Renewable Energy Laboratory, Golden, United States"],"email":"julie.bessac@nrel.gov","is_corresponding":false,"name":"Julie Bessac"},{"affiliations":["National Renewable Energy Laboratory, Golden, United States"],"email":"kristi.potter@nrel.gov","is_corresponding":false,"name":"Kristi Potter"}],"award":"","doi":"","event_id":"w-uncertainty","event_title":"Uncertainty Visualization: Applications, Techniques, Software, and Decision Frameworks","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-uncertainty-1017","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-uncertainty0","session_room":"None","session_title":"Uncertainty Visualization","session_uid":"w-uncertainty","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Uncertainty Visualization"],"time_stamp":"","title":"Uncertainty Visualization Challenges in Decision Systems with Ensemble Data & Surrogate Models","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-uncertainty-1018","abstract":"Although people frequently make decisions based on uncertain forecasts about future events, there is little guidance about how best to represent the uncertainty in forecasts. One common approach is to use multiple forecast visualizations, in which multiple forecasts are plotted on the same graph. This provides an implicit representation of the uncertainty in the data, but it is not clear how many forecasts to show, or how viewers might be influenced by seeing the more extreme forecasts rather than those closer to the mean. In this study, we showed participants forecasts of wind speed data and they made decisions based on their predictions about the future wind speed. We allowed participants to choose how many forecasts to view prior to making a decision, and we manipulated the ordering of the forecasts and the cost of each additional forecast. We found that participants viewed more forecasts when the outcome was more ambiguous. The order of the forecasts had little impact on their decisions when there was no cost for the additional information. However, when there was a cost for each forecast, the participants were much more likely to make a guess based on only the first forecast shown. 
In this case, showing one of the extreme forecasts first led to less optimal decisions.","authors":[{"affiliations":["Sandia National Laboratories, Albuquerque, United States"],"email":"lematze@sandia.gov","is_corresponding":true,"name":"Laura Matzen"},{"affiliations":["Sandia National Laboratories, Albuquerque, United States"],"email":"mcstite@sandia.gov","is_corresponding":false,"name":"Mallory C Stites"},{"affiliations":["Sandia National Laboratories, Albuquerque, United States"],"email":"kmdivis@sandia.gov","is_corresponding":false,"name":"Kristin M Divis"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"abendeck3@gatech.edu","is_corresponding":false,"name":"Alexander Bendeck"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"john.stasko@cc.gatech.edu","is_corresponding":false,"name":"John Stasko"},{"affiliations":["Northeastern University, Boston, United States"],"email":"l.padilla@northeastern.edu","is_corresponding":false,"name":"Lace M. Padilla"}],"award":"","doi":"","event_id":"w-uncertainty","event_title":"Uncertainty Visualization: Applications, Techniques, Software, and Decision Frameworks","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-uncertainty-1018","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-uncertainty0","session_room":"None","session_title":"Uncertainty Visualization","session_uid":"w-uncertainty","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Uncertainty Visualization"],"time_stamp":"","title":"Effects of Forecast Number, Order, and Cost in Multiple Forecast Visualizations","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-uncertainty-1019","abstract":"We present a simple comparative framework for testing and developing uncertainty modeling in uncertain marching cubes implementations. The selection of a model to represent the probability distribution of uncertain values directly influences the memory use, run time, and accuracy of an uncertainty visualization algorithm. We use an entropy calculation directly on ensemble data to establish an expected result and then compare the entropy from various probability models, including uniform, Gaussian, histogram, and quantile models. Our results verify that models matching the distribution of the ensemble indeed match the entropy. We further show that fewer bins in nonparametric histogram models are more effective whereas large numbers of bins in quantile models approach data accuracy.","authors":[{"affiliations":["University of Illinois Urbana-Champaign, Urbana, United States"],"email":"sisneros@illinois.edu","is_corresponding":true,"name":"Robert Sisneros"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"tushar.athawale@gmail.com","is_corresponding":false,"name":"Tushar M. 
Athawale"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"kmorel@acm.org","is_corresponding":false,"name":"Kenneth Moreland"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"pugmire@ornl.gov","is_corresponding":false,"name":"David Pugmire"}],"award":"","doi":"","event_id":"w-uncertainty","event_title":"Uncertainty Visualization: Applications, Techniques, Software, and Decision Frameworks","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-uncertainty-1019","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-uncertainty0","session_room":"None","session_title":"Uncertainty Visualization","session_uid":"w-uncertainty","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Uncertainty Visualization"],"time_stamp":"","title":"An Entropy-Based Test and Development Framework for Uncertainty Modeling in Level-Set Visualizations","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-eduvis-1007","abstract":"Visualizations are a critical medium not only for telling stories, but for fostering exploration. But while there are countless examples how to use visualizations for\u201cstorytelling with data,\u201d there are few guidelines on how to design visualizations for public exploration. This educator report draws on decades of work in science museums, a public context focused on designing interactive experiences for exploration, to provide evidence-based guidelines for designing exploratory visualizations. Recent studies on interactive visualizations in museums are contextualized within a larger body of museum research on designs that support exploratory learning in interactive exhibits. 
Synthesizing these studies highlights that to create successful exploratory visualizations, designers can apply long-standing guidelines from exhibit design but need to provide more aids for interpretation.","authors":[{"affiliations":["Science Communication Lab, Berkeley, United States","University of California, San Francisco, San Francisco, United States"],"email":"jafrazier@gmail.com","is_corresponding":true,"name":"Jennifer Frazier"}],"award":"","doi":"","event_id":"w-eduvis","event_title":"EduVis: Workshop on Visualization Education, Literacy, and Activities","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-eduvis-1007","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-eduvis0","session_room":"None","session_title":"EduVis","session_uid":"w-eduvis","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["EduVis"],"time_stamp":"","title":"Beyond storytelling with data: Guidelines for designing exploratory visualizations","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-eduvis-1008","abstract":"With the increasing amount of data globally, analyzing and visualizing data are becoming essential skills across various professions. It is important to equip university students with these essential data skills. To learn, design, and develop data visualization, students need knowledge of programming and data science topics. Many university programs lack dedicated data science courses for undergraduate students, making it important to introduce these concepts through integrated courses. However, combining data science and data visualization into one course can be challenging due to the time constraints and the heavy load of learning. In this paper, we discuss the development of teaching data science and data visualization together in one course and share the results of the post-course evaluation survey. From the survey's results, we identified four challenges, including difficulty in learning multiple tools and diverse data science topics, varying proficiency levels with tools and libraries, and selecting and cleaning datasets. We also distilled five opportunities for developing a successful data science and visualization course. 
These opportunities include clarifying the course structure, emphasizing visualization literacy early in the course, updating the course content according to student needs, using large real-world datasets, learning from industry professionals, and promoting collaboration among students.","authors":[{"affiliations":["Carleton University, Ottawa, Canada"],"email":"shrihariniramesh@cmail.carleton.ca","is_corresponding":true,"name":"Shri Harini Ramesh"},{"affiliations":["Carleton University, Ottawa, Canada","Bruyere Research Institute, Ottawa, Canada"],"email":"fateme.rajabiyazdi@carleton.ca","is_corresponding":false,"name":"Fateme Rajabiyazdi"}],"award":"","doi":"","event_id":"w-eduvis","event_title":"EduVis: Workshop on Visualization Education, Literacy, and Activities","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-eduvis-1008","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-eduvis0","session_room":"None","session_title":"EduVis","session_uid":"w-eduvis","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["EduVis"],"time_stamp":"","title":"Challenges and Opportunities of Teaching Data Visualization Together with Data Science","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-eduvis-1010","abstract":"This report examines the implementation of the Solution Framework in a social impact project facilitated by VizForSocialGood. It outlines the data visualization process, detailing each stage and offering practical insights. The framework's application demonstrates its effectiveness in enhancing project quality, efficiency, and collaboration, making it a valuable tool for educational and professional environments.","authors":[{"affiliations":["Independent Information Designer, Medellin, Colombia","Independent Information Designer, Medellin, Colombia"],"email":"munozdataviz@gmail.com","is_corresponding":true,"name":"Victor Mu\u00f1oz"},{"affiliations":["Corporate Information Designer, Arlington Hts, United States","Corporate Information Designer, Arlington Hts, United States"],"email":"hellokevinford@gmail.com","is_corresponding":false,"name":"Kevin Ford"}],"award":"","doi":"","event_id":"w-eduvis","event_title":"EduVis: Workshop on Visualization Education, Literacy, and Activities","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-eduvis-1010","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-eduvis0","session_room":"None","session_title":"EduVis","session_uid":"w-eduvis","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["EduVis"],"time_stamp":"","title":"Implementing the Solution Framework in a Social Impact Project","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-eduvis-1013","abstract":"Academic advising can positively impact struggling students' success. 
We developed AdVizor, a data-driven learning analytics tool for academic risk prediction for advisors. Our system is equipped with a random forest model for grade prediction probabilities and uses a visualization dashboard to allow advisors to interpret model predictions. We evaluated our system in mock advising sessions with academic advisors and undergraduate students at our university. Results show that the system can easily integrate into the existing advising workflow, and visualizations of model outputs can be learned through short training sessions. AdVizor supports and complements the existing expertise of the advisor while helping to facilitate advisor-student discussion and analysis. Advisors found the system assisted them in guiding student course selection for the upcoming semester. It allowed them to guide students to prioritize the most critical and impactful courses. Both advisors and students perceived the system positively and were interested in using the system in the future. Our results encourage the development of intelligent advising systems in higher education, tailored to advisors.","authors":[{"affiliations":["Ontario Tech University, Oshawa, Canada"],"email":"riley.weagant@ontariotechu.net","is_corresponding":false,"name":"Riley Weagant"},{"affiliations":["Ontario Tech University, Oshawa, Canada"],"email":"zixin.zhao@ontariotechu.net","is_corresponding":true,"name":"Zixin Zhao"},{"affiliations":["Ontario Tech University, Oshawa, Canada"],"email":"abradley@uncharted.software","is_corresponding":false,"name":"Adam Badley"},{"affiliations":["Ontario Tech University, Oshawa, Canada"],"email":"christopher.collins@ontariotechu.ca","is_corresponding":false,"name":"Christopher Collins"}],"award":"","doi":"","event_id":"w-eduvis","event_title":"EduVis: Workshop on Visualization Education, Literacy, and Activities","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-eduvis-1013","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-eduvis0","session_room":"None","session_title":"EduVis","session_uid":"w-eduvis","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["EduVis"],"time_stamp":"","title":"AdVizor: Using Visual Explanations to Guide Data-Driven Student Advising","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-eduvis-1015","abstract":"The integration of visualization in computing education has emerged as a promising strategy to enhance student understanding and engagement in complex computing concepts. Motivated by the need to explore effective teaching methods, this research systematically reviews the applications of visualization tools in computing education, aiming to identify gaps and opportunities for future research. We conducted a systematic literature review using papers from Semantic Scholar and Web of Science, and using a refined set of keywords to gather relevant studies. Our search yielded 288 results, which were systematically filtered to include 90 papers. Data extraction focused on publication details, research methods, key findings, future research suggestions, and research categories. 
Our review identified a diverse range of visualization tools and techniques used across different areas of computing education, including algorithms, programming, online learning, and problem-solving. The findings highlight the effectiveness of these tools in improving student engagement, understanding, and learning outcomes. However, there is a need for rigorous evaluations and the development of new models tailored to specific learning difficulties. By identifying effective visualization techniques and areas for further investigation, this review encourages the continued development and integration of visual tools in computing education to support the advancement of teaching methodologies.","authors":[{"affiliations":["University of Toronto, Toronto, Canada"],"email":"naaz.sibia@utoronto.ca","is_corresponding":true,"name":"Naaz Sibia"},{"affiliations":["University of Toronto Mississauga, Mississauga, Canada"],"email":"michael.liut@utoronto.ca","is_corresponding":false,"name":"Michael Liut"},{"affiliations":["University of Toronto, Toronto, Canada"],"email":"cnobre@cs.toronto.edu","is_corresponding":false,"name":"Carolina Nobre"}],"award":"","doi":"","event_id":"w-eduvis","event_title":"EduVis: Workshop on Visualization Education, Literacy, and Activities","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-eduvis-1015","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-eduvis0","session_room":"None","session_title":"EduVis","session_uid":"w-eduvis","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["EduVis"],"time_stamp":"","title":"Exploring the Role of Visualization in Enhancing Computing Education: A Systematic Literature Review","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-eduvis-1017","abstract":"The digitalisation of organisations has transformed the way organisations view data. All employees are expected to be data literate and managers are expected to make data-driven decisions [1]. The ability to analyse and visualize the data is a crucial skill set expected from every decision-maker. To help managers develop the skill of data visualization, business schools across the world offer courses in data visualization. From an educator\u2019s perspective, one key decision that he/she must take while designing a visualization course for management students is the software tool to use in the course. Existing literature on data visualization in the scientific community is primarily focused on tools used by researchers or computer scientists ([3], [4]). In [5] the authors evaluate the landscape of commercially available visual analytics systems. In business-related publications like Harvard Business Review, the focus is more on selecting the right chart or on designing effective visualization ([6], [7]). There is a lack of literature to guide educators in teaching visualization to management students. 
This article attempts to guide educators teaching visualization to management students on how to select the appropriate software tool for their course.","authors":[{"affiliations":["Indian institute of management indore, Indore, India"],"email":"sanjogr@iimidr.ac.in","is_corresponding":true,"name":"Sanjog Ray"}],"award":"","doi":"","event_id":"w-eduvis","event_title":"EduVis: Workshop on Visualization Education, Literacy, and Activities","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-eduvis-1017","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-eduvis0","session_room":"None","session_title":"EduVis","session_uid":"w-eduvis","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["EduVis"],"time_stamp":"","title":"Visualization Software: How to Select the Right Software for Teaching Visualization.","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-eduvis-1018","abstract":"In this article, we discuss an experience with design and situated learning in the Creative Data Visualization course, part of the Visual Communication Design undergraduate program at the Federal University of Rio de Janeiro, a free, public Brazilian university that, thanks to affirmative action policies, has become more inclusive over the years. We begin with a brief introduction to the terms Situated Knowledge, coined by Donna Haraway, Situated Design, based on the former concept, and Situated Learning. We then examine the similarities and differences between these notions and the term Situated Visualization to present a model for the concept of Situated Learning in Information Visualization. Following this foundation, we describe the applied methodology, emphasizing the importance of integrating real-world contexts into students\u2019 projects. As a case study, we present three student projects produced as final assignments for the course. 
Through this article, we aim to underscore the articulation of situated design concepts in information visualization activities and contribute to teaching and learning practices in this field, particularly within the Global South.","authors":[{"affiliations":["Universidade Federal do Rio de Janeiro, Rio de Janeiro, Brazil"],"email":"doriskos@eba.ufrj.br","is_corresponding":true,"name":"Doris Kosminsky"},{"affiliations":["Federal University of Rio de Janeiro, Rio de Janeiro, Brazil"],"email":"renata.perim@ufrj.br","is_corresponding":false,"name":"Renata Perim Lopes"},{"affiliations":["UFRJ, RJ, Brazil","IBGE, RJ, Brazil"],"email":"regina.reznik@ufrj.br","is_corresponding":false,"name":"Regina Reznik"}],"award":"","doi":"","event_id":"w-eduvis","event_title":"EduVis: Workshop on Visualization Education, Literacy, and Activities","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-eduvis-1018","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-eduvis0","session_room":"None","session_title":"EduVis","session_uid":"w-eduvis","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["EduVis"],"time_stamp":"","title":"Teaching Information Visualization through Situated Design: Case Studies from the Classroom","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-eduvis-1019","abstract":"The integration of data visualization in journalism has catalyzed the growth of data storytelling in recent years. Today, it is increasingly common for journalism schools to incorporate data visualization into their curricula. However, the approach to teaching data visualization in journalism schools can diverge significantly from that in computer science or design schools, influenced by the varied backgrounds of students and the distinct value systems inherent to these disciplines. This paper reviews my experience and reflections on teaching data visualization in a journalism school. First, I discuss the prominent characteristics of journalism education that pose challenges for course design and teaching. 
Then, I share firsthand teaching experiences related to each characteristic and recommend approaches for effective teaching.","authors":[{"affiliations":["Fudan University, Shanghai, China"],"email":"xingyulan96@gmail.com","is_corresponding":true,"name":"Xingyu Lan"}],"award":"","doi":"","event_id":"w-eduvis","event_title":"EduVis: Workshop on Visualization Education, Literacy, and Activities","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-eduvis-1019","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-eduvis0","session_room":"None","session_title":"EduVis","session_uid":"w-eduvis","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["EduVis"],"time_stamp":"","title":"Reflections on Teaching Data Visualization at the Journalism School","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-eduvis-1020","abstract":"In this paper, we discuss our experiences advancing a professional-oriented graduate program in Cartography & GIScience at the University of Wisconsin-Madison to account for fundamental shifts in conceptual framings, rapidly evolving mapping technologies, and diverse student needs. We focus our attention on considerations for the cartography curriculum given its relevance to (geo)visualization education and map literacy. We reflect on challenges associated with, and lessons learned from, developing a comprehensive and cohesive cartography curriculum across in-person and online learning modalities for a wide range of professional student audiences.","authors":[{"affiliations":["University of Wisconsin-Madison, Madison, United States"],"email":"jknelson3@wisc.edu","is_corresponding":true,"name":"Jonathan Nelson"},{"affiliations":["University of Wisconsin-Madison, Madison, United States"],"email":"limpisathian@wisc.edu","is_corresponding":false,"name":"P. William Limpisathian"},{"affiliations":["University of Wisconsin-Madison, Madison, United States"],"email":"reroth@wisc.edu","is_corresponding":false,"name":"Robert Roth"}],"award":"","doi":"","event_id":"w-eduvis","event_title":"EduVis: Workshop on Visualization Education, Literacy, and Activities","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-eduvis-1020","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-eduvis0","session_room":"None","session_title":"EduVis","session_uid":"w-eduvis","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["EduVis"],"time_stamp":"","title":"Developing a Robust Cartography Curriculum to Train the Professional Cartographer","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-eduvis-1026","abstract":"For over half a century, science centers have been key in communicating science, aiming to increase interest and curiosity in STEM, and promote lifelong learning. 
Science centers integrate interactive technologies like dome displays, touch tables, VR and AR for immersive learning. Visitors can explore complex phenomena, for example by conducting a virtual autopsy. Also, the shift towards digitally interactive exhibits has expanded science centers beyond physical locations to virtual spaces, extending their reach into classrooms. Our investigation revealed several key factors for impactful school visits involving interactive data visualization. Full-dome movies, for example, provide unique perspectives on vast and microscopic phenomena. Hands-on discovery allows pupils to manipulate and investigate data, leading to deeper engagement. Collaborative interaction fosters active learning through group participation. Additionally, clear curriculum connections ensure that visits are pedagogically meaningful. We propose a three-stage model for school visits. The \"Experience\" stage involves immersive visual experiences to spark interest. The \"Engagement\" stage builds on this by providing hands-on interaction with data visualization exhibits. The \"Applicate\" stage offers opportunities to apply and create using data visualization. A future goal of the model is to broaden STEM reach, enabling pupils to benefit from data visualization experiences even if they cannot visit centers.","authors":[{"affiliations":["Link\u00f6ping university, Norrk\u00f6ping, Sweden"],"email":"andreas.c.goransson@liu.se","is_corresponding":true,"name":"Andreas G\u00f6ransson"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"konrad.schonborn@liu.se","is_corresponding":false,"name":"Konrad J Sch\u00f6nborn"}],"award":"","doi":"","event_id":"w-eduvis","event_title":"EduVis: Workshop on Visualization Education, Literacy, and Activities","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-eduvis-1026","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-eduvis0","session_room":"None","session_title":"EduVis","session_uid":"w-eduvis","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["EduVis"],"time_stamp":"","title":"What makes school visits to digital science centers successful?","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-eduvis-1027","abstract":"Parallel coordinate plots (PCPs) are gaining popularity in data exploration, statistical analysis, and predictive analysis, as well as in data-driven storytelling. In this paper, we present the results of a post-hoc analysis of a dataset from a PCP literacy intervention to identify barriers to PCP literacy. We analyzed question responses and inductively identified barriers to PCP literacy. We performed group coding on each individual response and identified new barriers to PCP literacy. Based on our analysis, we present an extended and enhanced list of barriers to PCP literacy. 
Our findings have implications for educational interventions targeting PCP literacy and can provide an approach for students to learn about PCPs through active learning.","authors":[{"affiliations":["University of San Francisco, San Francisco, United States"],"email":"csrinivas2@dons.usfca.edu","is_corresponding":false,"name":"Chandana Srinivas"},{"affiliations":["Cukurova University, Adana, Turkey"],"email":"elifemelfirat@gmail.com","is_corresponding":false,"name":"Elif E. Firat"},{"affiliations":["University of Nottingham, Nottingham, United Kingdom"],"email":"robert.laramee@nottingham.ac.uk","is_corresponding":false,"name":"Robert S. Laramee"},{"affiliations":["University of San Francisco, San Francisco, United States"],"email":"apjoshi@usfca.edu","is_corresponding":true,"name":"Alark Joshi"}],"award":"","doi":"","event_id":"w-eduvis","event_title":"EduVis: Workshop on Visualization Education, Literacy, and Activities","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-eduvis-1027","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-eduvis0","session_room":"None","session_title":"EduVis","session_uid":"w-eduvis","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["EduVis"],"time_stamp":"","title":"An Inductive Approach for Identification of Barriers to PCP Literacy","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-eduvis-1028","abstract":"With the decreasing cost of consumer display technologies making it easier for universities to have larger displays in classrooms, and the ubiquitous use of online tools such as collaborative whiteboards for remote learning during the COVID-19 pandemic, combining the two can be useful in higher education. This is especially true in visually intensive classes, such as data visualization courses, which can benefit from additional \"space to teach,\" coined after the \"space to think\" sense-making idiom. In this paper, we reflect on our approach to using SAGE3, a collaborative whiteboard with advanced features, in higher education to teach visually intensive classes, provide examples of activities from our own visually-intensive courses, and present student feedback. We gather our observations into usage patterns for using content-rich canvases in education.","authors":[{"affiliations":["Virginia Tech, Blacksburg, United States"],"email":"jessemh@vt.edu","is_corresponding":true,"name":"Jesse Harden"},{"affiliations":["University of Hawaii at Manoa, Honolulu, United States"],"email":"nuritk@hawaii.edu","is_corresponding":false,"name":"Nurit Kirshenbaum"},{"affiliations":["University of Hawaii at Manoa, Honolulu, United States"],"email":"tabalbar@hawaii.edu","is_corresponding":false,"name":"Roderick S Tabalba Jr."},{"affiliations":["University of Hawaii at Manoa, Honolulu, United States"],"email":"rtheriot@hawaii.edu","is_corresponding":false,"name":"Ryan Theriot"},{"affiliations":["The University of Hawai'i at M\u0101noa, Honolulu, United States"],"email":"mlr2010@hawaii.edu","is_corresponding":false,"name":"Michael L. 
Rogers"},{"affiliations":["University of Hawaii at Manoa, Honolulu, United States"],"email":"mahdi@hawaii.edu","is_corresponding":false,"name":"Mahdi Belcaid"},{"affiliations":["Virginia Tech, Blacksburg, United States"],"email":"north@vt.edu","is_corresponding":false,"name":"Chris North"},{"affiliations":["University of Illinois at Chicago, Chicago, United States"],"email":"renambot@uic.edu","is_corresponding":false,"name":"Luc Renambot"},{"affiliations":["University of Illinois at Chicago, Chicago, United States"],"email":"llong4@uic.edu","is_corresponding":false,"name":"Lance Long"},{"affiliations":["University of Illinois Chicago, Chicago, United States"],"email":"ajohnson@uic.edu","is_corresponding":false,"name":"Andrew E Johnson"},{"affiliations":["University of Hawaii at Manoa, Honolulu, United States"],"email":"leighj@hawaii.edu","is_corresponding":false,"name":"Jason Leigh"}],"award":"","doi":"","event_id":"w-eduvis","event_title":"EduVis: Workshop on Visualization Education, Literacy, and Activities","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-eduvis-1028","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-eduvis0","session_room":"None","session_title":"EduVis","session_uid":"w-eduvis","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["EduVis"],"time_stamp":"","title":"Space to Teach: Content-Rich Canvases for Visually-Intensive Education","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-eduvis-1029","abstract":"Data-art blends visualisation, data science, and artistic expression. It allows people to transform information and data into exciting and interesting visual narratives. Hosting a public data-art hands-on workshop enables participants to engage with data and learn fundamental visualisation techniques. However, being a public event, it presents a range of challenges. We outline our approach to organising and conducting a public workshop, that caters to a wide age range, from children to adults. We divide the tutorial into three sections, focusing on data, sketching skills and visualisation. 
We place emphasis on public engagement and ensure that participants have fun while learning new skills.","authors":[{"affiliations":["Bangor University, Bangor, United Kingdom"],"email":"j.c.roberts@bangor.ac.uk","is_corresponding":true,"name":"Jonathan C Roberts"}],"award":"","doi":"","event_id":"w-eduvis","event_title":"EduVis: Workshop on Visualization Education, Literacy, and Activities","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-eduvis-1029","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-eduvis0","session_room":"None","session_title":"EduVis","session_uid":"w-eduvis","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["EduVis"],"time_stamp":"","title":"Engaging Data-Art: Conducting a Public Hands-On Workshop","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-eduvis-1030","abstract":"We propose to leverage recent developments in Large Language Models, in combination with data visualization software and devices in science centers and schools, in order to foster more personalized learning experiences. The main goal of our endeavour is to provide pupils and visitors with the same experience they would get with a professional facilitator when interacting with data visualizations of complex scientific phenomena. We describe the results from our early prototypes and the intended implementation and testing of our idea.","authors":[{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"lonni.besancon@gmail.com","is_corresponding":true,"name":"Lonni Besan\u00e7on"},{"affiliations":["LiU Link\u00f6ping Universitet, Norrk\u00f6ping, Sweden"],"email":"mathis.brossier@liu.se","is_corresponding":false,"name":"Mathis Brossier"},{"affiliations":["King Abdullah University of Science and Technology, Thuwal, Saudi Arabia"],"email":"omar.mena@kaust.edu.sa","is_corresponding":false,"name":"Omar Mena"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"erik.sunden@liu.se","is_corresponding":false,"name":"Erik Sund\u00e9n"},{"affiliations":["Link\u00f6ping university, Norrk\u00f6ping, Sweden"],"email":"andreas.c.goransson@liu.se","is_corresponding":false,"name":"Andreas G\u00f6ransson"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"anders.ynnerman@liu.se","is_corresponding":false,"name":"Anders Ynnerman"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"konrad.schonborn@liu.se","is_corresponding":false,"name":"Konrad J Sch\u00f6nborn"}],"award":"","doi":"","event_id":"w-eduvis","event_title":"EduVis: Workshop on Visualization Education, Literacy, and 
Activities","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-eduvis-1030","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-eduvis0","session_room":"None","session_title":"EduVis","session_uid":"w-eduvis","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["EduVis"],"time_stamp":"","title":"TellUs \u2013 Leveraging the power of LLMs with visualization to benefit science centers.","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-eduvis-1031","abstract":"In this reflective essay, we explore how educational science can be relevant for visualization research, addressing beneficial intersections between the two communities. While visualization has become integral to various areas, including education, our own ongoing collaboration has induced reflections and discussions we believe could benefit visualization research. In particular, we identify five key perspectives: surpassing traditional evaluation metrics by incorporating established educational measures; defining constructs based on existing learning and educational research frameworks; applying established cognitive theories to understand interpretation and interaction with visualizations; establishing uniform terminology across disciplines; and, fostering interdisciplinary convergence. We argue that by integrating educational research constructs, methodologies, and theories, visualization research can further pursue ecological validity and thereby improve the design and evaluation of visual tools. Our essay emphasizes the potential of intensified and systematic collaborations between educational scientists and visualization researchers to advance both fields, and in doing so craft visualization systems that support comprehension, retention, transfer, and critical thinking. 
We argue that this reflective essay serves as a first point of departure for initiating dialogue that, we hope, could help further connect educational science and visualization, by proposing future empirical studies that take advantage of interdisciplinary approaches for the mutual gain of both communities.","authors":[{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"konrad.schonborn@liu.se","is_corresponding":false,"name":"Konrad J Sch\u00f6nborn"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"lonni.besancon@gmail.com","is_corresponding":true,"name":"Lonni Besan\u00e7on"}],"award":"","doi":"","event_id":"w-eduvis","event_title":"EduVis: Workshop on Visualization Education, Literacy, and Activities","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-eduvis-1031","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-eduvis0","session_room":"None","session_title":"EduVis","session_uid":"w-eduvis","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["EduVis"],"time_stamp":"","title":"What Can Educational Science Offer Visualization? A Reflective Essay","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-energyvis-1762","abstract":"Weather can have a significant impact on the power grid. Heat and cold waves lead to increased energy use as customers cool or heat their space, while simultaneously hampering energy production as the environment deviates from ideal operating conditions. Extreme heat has previously melted power cables, while extreme cold can cause vital parts of the energy infrastructure to freeze. Utilities have reserves to compensate for the additional energy use, but in extreme cases which fall outside the forecast energy demand, the impact on the power grid can be severe. In this paper, we present an interactive tool to explore the relationship between weather and power outages. 
We demonstrate its use with the example of Winter Storm Uri\u2019s impact on Texas in February 2021.","authors":[{"affiliations":["Institute of Computer Science, Leipzig University, Leipzig, Germany"],"email":"nsonga@informatik.uni-leipzig.de","is_corresponding":true,"name":"Baldwin Nsonga"},{"affiliations":["National Renewable Energy Laboratory, Golden, United States"],"email":"andy.berres@gmail.com","is_corresponding":false,"name":"Andy S Berres"},{"affiliations":["National Renewable Energy Laboratory, Golden, United States"],"email":"bobby.jeffers@nrel.gov","is_corresponding":false,"name":"Robert Jeffers"},{"affiliations":["National Renewable Energy Laboratory, Golden, United States"],"email":"caitlyn.clark6@icloud.com","is_corresponding":false,"name":"Caitlyn Clark"},{"affiliations":["University of Kaiserslautern, Kaiserslautern, Germany"],"email":"hagen@cs.uni-kl.de","is_corresponding":false,"name":"Hans Hagen"},{"affiliations":["Leipzig University, Leipzig, Germany"],"email":"scheuermann@informatik.uni-leipzig.de","is_corresponding":false,"name":"Gerik Scheuermann"}],"award":"","doi":"","event_id":"w-energyvis","event_title":"EnergyVis 2024: 4th Workshop on Energy Data Visualization","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-energyvis-1762","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-energyvis0","session_room":"None","session_title":"EnergyVis","session_uid":"w-energyvis","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["EnergyVis"],"time_stamp":"","title":"Extreme Weather and the Power Grid: A Case Study of Winter Storm Uri","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-energyvis-2646","abstract":"With the growing penetration of inverter-based distributed energy resources and increased loads through electrification, power systems analyses are becoming more important and more complex. Moreover, these analyses increasingly involve the combination of interconnected energy domains with data that are spatially and temporally increasing in scale by orders of magnitude, surpassing the capabilities of many existing analysis and decision-support systems. We present the architectural design, development, and application of a high-resolution web-based visualization environment capable of cross-domain analysis of tens of millions of energy assets, focusing on scalability and performance. Our system supports the exploration, navigation, and analysis of large data from diverse domains such as electrical transmission and distribution systems, mobility and electric vehicle charging networks, communications networks, cyber assets, and other supporting infrastructure. 
We evaluate this system across multiple use cases, describing the capabilities and limitations of a web-based approach for high-resolution energy system visualizations.","authors":[{"affiliations":["National Renewable Energy Lab, Golden, United States"],"email":"graham.johnson@nrel.gov","is_corresponding":false,"name":"Graham Johnson"},{"affiliations":["National Renewable Energy Lab, Golden, United States"],"email":"sam.molnar@nrel.gov","is_corresponding":false,"name":"Sam Molnar"},{"affiliations":["National Renewable Energy Laboratory, Golden, United States"],"email":"nicholas.brunhart-lupo@nrel.gov","is_corresponding":false,"name":"Nicholas Brunhart-Lupo"},{"affiliations":["National Renewable Energy Lab, Golden, United States"],"email":"kenny.gruchalla@nrel.gov","is_corresponding":true,"name":"Kenny Gruchalla"}],"award":"","doi":"","event_id":"w-energyvis","event_title":"EnergyVis 2024: 4th Workshop on Energy Data Visualization","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-energyvis-2646","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-energyvis0","session_room":"None","session_title":"EnergyVis","session_uid":"w-energyvis","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["EnergyVis"],"time_stamp":"","title":"Architecture for Web-Based Visualization of Large-Scale Energy Domains","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-energyvis-2743","abstract":"In the pursuit of achieving net-zero greenhouse gas emissions by 2050, policymakers and researchers require sophisticated tools to explore and compare various climate transition scenarios. This paper introduces the Pathways Explorer, an innovative visualization tool designed to facilitate these comparisons by providing an interactive platform that allows users to select, view, and dissect multiple pathways towards sustainability. Developed in collaboration with the \u201cInstitut de l\u2019\u00e9nergie Trottier\u201d (IET), this tool leverages a technoeconomic optimization model to project the energy transformation needed under different constraints and assumptions. We detail the design process that guided the development of the Pathways Explorer, focusing on user-centered design challenges and requirements. A case study is presented to demonstrate how the tool has been utilized by stakeholders to make informed decisions, highlighting its impact and effectiveness. 
The Pathways Explorer not only enhances understanding of complex climate data but also supports strategic planning by providing clear, comparative visualizations of potential future scenarios.","authors":[{"affiliations":["Kashika Studio, Montreal, Canada"],"email":"francois.levesque@polymtl.ca","is_corresponding":false,"name":"Fran\u00e7ois L\u00e9vesque"},{"affiliations":["Polytechnique Montreal, Montreal, Canada"],"email":"louis.beaumier@polymtl.ca","is_corresponding":false,"name":"Louis Beaumier"},{"affiliations":["Polytechnique Montreal, Montreal, Canada"],"email":"thomas.hurtut@polymtl.ca","is_corresponding":true,"name":"Thomas Hurtut"}],"award":"","doi":"","event_id":"w-energyvis","event_title":"EnergyVis 2024: 4th Workshop on Energy Data Visualization","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-energyvis-2743","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-energyvis0","session_room":"None","session_title":"EnergyVis","session_uid":"w-energyvis","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["EnergyVis"],"time_stamp":"","title":"Pathways Explorer: Interactive Visualization of Climate Transition Scenarios","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-energyvis-2845","abstract":"Methane (CH4) leakage monitoring is crucial for environmental protection and regulatory compliance, particularly in the oil and gas industries. Reducing CH4 emissions helps advance green energy by converting it into a valuable energy source through innovative capture technologies. A real-time continuous monitoring system (CMS) is necessary to detect fugitive and intermittent emissions and provide actionable insights. Integrating spatiotemporal data from satellites, airborne sensors, and ground sensors with inventory data and the weather research and forecasting (WRF) model creates a comprehensive dataset, making CMS feasible but posing significant challenges. These challenges include data alignment and fusion, managing heterogeneity, handling missing values, ensuring resolution integrity, and maintaining geometric and radiometric accuracy. This study outlines the procedure for methane leakage detection, addressing challenges at each step and offering solutions through machine learning and data analysis. 
It further details how visual analytics can be implemented to improve the effectiveness of the various aspects of emission monitoring.","authors":[{"affiliations":["University of Oklahoma, Norman, United States"],"email":"parisa.masnadi@ou.edu","is_corresponding":true,"name":"Parisa Masnadi Khiabani"},{"affiliations":["University of Oklahoma, Norman, United States"],"email":"danala@ou.edu","is_corresponding":false,"name":"Gopichandh Danala"},{"affiliations":["University of Oklahoma, Norman, United States"],"email":"wolfgang.jentner@uni-konstanz.de","is_corresponding":false,"name":"Wolfgang Jentner"},{"affiliations":["University of Oklahoma, Oklahoma, United States"],"email":"ebert@ou.edu","is_corresponding":false,"name":"David Ebert"}],"award":"","doi":"","event_id":"w-energyvis","event_title":"EnergyVis 2024: 4th Workshop on Energy Data Visualization","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-energyvis-2845","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-energyvis0","session_room":"None","session_title":"EnergyVis","session_uid":"w-energyvis","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["EnergyVis"],"time_stamp":"","title":"Challenges in Data Integration, Monitoring, and Exploration of Methane Emissions: The Role of Data Analysis and Visualization","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-energyvis-3496","abstract":"Transmission System Operators (TSOs) often need to integrate multiple sources of information to make decisions in real time. In cases where a single power line goes offline, due to a natural event or scheduled outage, there typically will be a contingency plan that the TSO may utilize to mitigate the situation. In cases where two or more power lines go offline, this contingency plan is no longer valid, and they must re-prepare and reason about the network in real time. A key network property that must be balanced is loadability--the range of permissible voltage levels for a specific bus (or node), understood as a function of power and its active (P) and reactive (Q) components. Loadability indicates how much more demand a specific node can handle before the system becomes unstable. To increase loadability, the TSO can take control actions that raise or lower P or Q, changing the voltage levels so that they remain within permissible limits. While many methods exist to calculate loadability and present it to end users, there has been little focus on tailoring loadability visualizations to the unique needs of TSOs. In this paper, we involve operations domain experts in a human-centered design process to prototype two new loadability visualizations for TSOs. 
We contribute a design paper that yields: (1) a working model of the operator's decision making process, (2) example artifacts of the two data visualization techniques, and (3) a critical qualitative expert review of our designs.","authors":[{"affiliations":["Hitachi Energy Research, Montreal, Canada"],"email":"dmarino@cim.mcgill.ca","is_corresponding":true,"name":"David Marino"},{"affiliations":["Carleton University, Ottawa, Canada"],"email":"maxwellkeleher@cmail.carleton.ca","is_corresponding":false,"name":"Maxwell Keleher"},{"affiliations":["Hitachi Energy Research, Krakow, Poland"],"email":"krzysztof.chmielowiec@hitachienergy.com","is_corresponding":false,"name":"Krzysztof Chmielowiec"},{"affiliations":["Hitachi Energy Research, Montreal, Canada"],"email":"antony.hilliard@hitachienergy.com","is_corresponding":false,"name":"Antony Hilliard"},{"affiliations":["Hitachi Energy Research, Krakow, Poland"],"email":"pawel.dawidowski@hitachienergy.com","is_corresponding":false,"name":"Pawel Dawidowski"}],"award":"","doi":"","event_id":"w-energyvis","event_title":"EnergyVis 2024: 4th Workshop on Energy Data Visualization","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-energyvis-3496","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-energyvis0","session_room":"None","session_title":"EnergyVis","session_uid":"w-energyvis","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["EnergyVis"],"time_stamp":"","title":"Operator-Centered Design of a Nodal Loadability Network Visualization","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-energyvis-4332","abstract":"The rapid growth of the solar energy industry requires advanced educational tools to train the next generation of engineers and technicians. We present a novel system for situated visualization of photovoltaic (PV) module performance, leveraging a combination of PV simulation, sun-sky position, and head-mounted augmented reality (AR). Our system is guided by four principles of development: simplicity, adaptability, collaboration, and maintainability, realized in six components. 
Users interactively manipulate a physical module's orientation and shading referents with immediate feedback on the module's performance.","authors":[{"affiliations":["National Renewable Energy Laboratory, Golden, United States"],"email":"nicholas.brunhart-lupo@nrel.gov","is_corresponding":true,"name":"Nicholas Brunhart-Lupo"},{"affiliations":["National Renewable Energy Lab, Golden, United States"],"email":"kenny.gruchalla@nrel.gov","is_corresponding":false,"name":"Kenny Gruchalla"},{"affiliations":["Fort Lewis College, Durango, United States"],"email":"williams_l@fortlewis.edu","is_corresponding":false,"name":"Laurie Williams"},{"affiliations":["Fort Lewis College, Durango, United States"],"email":"selias@fortlewis.edu","is_corresponding":false,"name":"Steve Ellis"}],"award":"","doi":"","event_id":"w-energyvis","event_title":"EnergyVis 2024: 4th Workshop on Energy Data Visualization","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-energyvis-4332","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-energyvis0","session_room":"None","session_title":"EnergyVis","session_uid":"w-energyvis","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["EnergyVis"],"time_stamp":"","title":"Situated Visualization of Photovoltaic Module Performance for Workforce Development","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-energyvis-6102","abstract":"This paper introduces CPIE (Coal Pollution Impact Explorer), a spatiotemporal visual analytic tool developed for interactive visualization of coal pollution impacts. CPIE visualizes electricity-generating units (EGUs) and their contributions to statewide Medicare deaths related to coal PM2.5 emissions. The tool is designed to make scientific findings on the impacts of coal pollution more accessible to the general public and to raise awareness of the associated health risks. 
We present three use cases for CPIE: 1) the overall spatial distribution of all 480 facilities in the United States, their statewide impact on excess deaths, and the overall decreasing trend in deaths associated with coal pollution from 1999 to 2020; 2) the influence of pollution transport, where most deaths associated with a facility occur within the same state and in neighboring states, but some occur far away; and 3) the effectiveness of intervention regulations, such as installing emissions control devices and shutting down coal facilities, in significantly reducing the number of deaths associated with coal pollution.","authors":[{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"sjin86@gatech.edu","is_corresponding":true,"name":"Sichen Jin"},{"affiliations":["George Mason University, Fairfax, United States"],"email":"lhennem@gmu.edu","is_corresponding":false,"name":"Lucas Henneman"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"jessica.roberts@cc.gatech.edu","is_corresponding":false,"name":"Jessica Roberts"}],"award":"","doi":"","event_id":"w-energyvis","event_title":"EnergyVis 2024: 4th Workshop on Energy Data Visualization","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-energyvis-6102","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-energyvis0","session_room":"None","session_title":"EnergyVis","session_uid":"w-energyvis","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["EnergyVis"],"time_stamp":"","title":"CPIE: A Spatiotemporal Visual Analytic Tool to Explore the Impact of Coal Pollution","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-energyvis-9750","abstract":"This paper presents a novel open system, ChatGrid, for easy, intuitive, and interactive geospatial visualization of large-scale transmission networks. ChatGrid uses state-of-the-art techniques for geospatial visualization of large networks, including 2.5D map views, animated flows, hierarchical and level-based filtering and aggregation to provide visual information in an easy, cognitive manner. The highlight of ChatGrid is a natural language query based interface powered by a large language model (ChatGPT) that offers a natural and flexible interactive experience whereby users can ask questions and ChatGrid provides responses both in text and visually. 
This paper discusses the architecture, implementation, design decisions, and usage of large language models for ChatGrid.","authors":[{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"sjin86@gatech.edu","is_corresponding":true,"name":"Sichen Jin"},{"affiliations":["Pacific Northwest National Laboratory, Richland, United States"],"email":"shrirang.abhyankar@pnnl.gov","is_corresponding":false,"name":"Shrirang Abhyankar"}],"award":"","doi":"","event_id":"w-energyvis","event_title":"EnergyVis 2024: 4th Workshop on Energy Data Visualization","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-energyvis-9750","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-energyvis0","session_room":"None","session_title":"EnergyVis","session_uid":"w-energyvis","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["EnergyVis"],"time_stamp":"","title":"ChatGrid: Power Grid Visualization Empowered by a Large Language Model","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-vis4climate-1008","abstract":"Presenting the effects of and effective countermeasures for climate change is a significant challenge in science communication. Data-driven storytelling and narrative visualization can be part of the solution. However, such communication is limited when restricted to global or cross-regional scales, as climate effects are particular to the location and adaptations need to be local. In this work, we focus on data-driven storytelling that communicates local impacts of climate change. We analyze the adoption of data-driven storytelling by local news media in addressing climate-related topics. Further, we investigate the specific characteristics of the local scenario and present three application examples to showcase potential local data-driven stories. Since these examples are rooted in university teaching, we also discuss educational aspects. 
Finally, we summarize the interdisciplinary research challenges and opportunities for application associated with data-driven storytelling in a local context.","authors":[{"affiliations":["University of Bamberg, Bamberg, Germany"],"email":"fabian.beck@uni-bamberg.de","is_corresponding":true,"name":"Fabian Beck"},{"affiliations":["University of Bamberg, Bamberg, Germany"],"email":"lukas.panzer@uni-bamberg.de","is_corresponding":false,"name":"Lukas Panzer"},{"affiliations":["University of Bamberg, Bamberg, Germany"],"email":"marc.redepenning@uni-bamberg.de","is_corresponding":false,"name":"Marc Redepenning"}],"award":"","doi":"","event_id":"w-vis4climate","event_title":"Visualization for Climate Action and Sustainability","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-vis4climate-1008","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-vis4climate0","session_room":"None","session_title":"Vis4Climate","session_uid":"w-vis4climate","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Vis4Climate"],"time_stamp":"","title":"Local Climate Data Stories: Data-driven Storytelling to Communicate Effects and Mitigation of Climate Change in a Local Context","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-vis4climate-1011","abstract":"Climate change\u2019s global impact calls for coordinated visualization efforts to enhance collaboration and communication among key partners such as domain experts, community members, and policy makers. We present a collaborative initiative, EcoViz, where visualization practitioners and key partners co-designed environmental data visualizations to illustrate impacts on ecosystems and the benefit of informed management and nature-based solutions. Our three use cases rely on unique processing pipelines to represent time-dependent natural phenomena by combining cinematic, scientific, and information visualization methods. Scientific outputs are displayed through narrative data-driven animations, interactive geospatial web applications, and immersive Unreal Engine applications. Each field\u2019s decision-making process is specific, driving design decisions about the best representation and medium for each use case. Data-driven cinematic videos with simple charts and minimal annotations proved most effective for engaging large, diverse audiences. This flexible medium facilitates reuse, maintains critical details, and integrates well into broader narrative videos. 
The need for interdisciplinary visualizations highlights the importance of funding to integrate visualization practitioners throughout the scientific process to better translate data and knowledge into informed policy and practice.","authors":[{"affiliations":["University of California, San Diego, San Diego, United States"],"email":"jkb@ucsc.edu","is_corresponding":true,"name":"Jessica Marielle Kendall-Bar"},{"affiliations":["University of California, San Diego, La Jolla, United States"],"email":"inealey@ucsd.edu","is_corresponding":false,"name":"Isaac Nealey"},{"affiliations":["University of California, Santa Cruz, Santa Cruz, United States"],"email":"icostell@ucsc.edu","is_corresponding":false,"name":"Ian Costello"},{"affiliations":["University of California, Santa Cruz, Santa Cruz, United States"],"email":"chlowrie@ucsc.edu","is_corresponding":false,"name":"Christopher Lowrie"},{"affiliations":["University of California, San Diego, San Diego, United States"],"email":"khn009@ucsd.edu","is_corresponding":false,"name":"Kevin Huynh Nguyen"},{"affiliations":["University of California San Diego, La Jolla, United States"],"email":"pponganis@ucsd.edu","is_corresponding":false,"name":"Paul J. Ponganis"},{"affiliations":["University of California, Santa Cruz, Santa Cruz, United States"],"email":"mwbeck@ucsc.edu","is_corresponding":false,"name":"Michael W. Beck"},{"affiliations":["University of California, San Diego, San Diego, United States"],"email":"ialtintas@ucsd.edu","is_corresponding":false,"name":"\u0130lkay Alt\u0131nta\u015f"}],"award":"","doi":"","event_id":"w-vis4climate","event_title":"Visualization for Climate Action and Sustainability","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-vis4climate-1011","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-vis4climate0","session_room":"None","session_title":"Vis4Climate","session_uid":"w-vis4climate","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Vis4Climate"],"time_stamp":"","title":"EcoViz: an iterative methodology for designing multifaceted data-driven environmental visualizations that communicate ecosystem impacts and envision nature-based solutions","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-vis4climate-1018","abstract":"Household consumption significantly impacts climate change. Yet designing interventions to encourage consumption reduction that are tailored to each home's needs remains challenging. To address this, we developed Eco-Garden, a data sculpture designed to visualise household consumption aiming to promote sustainable practices. Eco-Garden serves as both an aesthetic piece for visitors and a functional tool for household members to understand their resource consumption. In this paper, we present the human-centred design process of Eco-Garden and the preliminary findings we made through the field study. We conducted a field study with 15 households to explore participants' experience with Eco-Garden and its potential to encourage sustainable practices at home. 
Our participants provided positive feedback on integrating Eco-Garden into their homes, highlighting considerations such as aesthetics, physicality, and the calm manner of presenting consumption data. Our insights contribute to developing data sculptures for households that can facilitate meaningful interactions with consumption data.","authors":[{"affiliations":["Cardiff University, UK, Cardiff, United Kingdom"],"email":"pereraud@cardiff.ac.uk","is_corresponding":true,"name":"Dushani Ushettige"},{"affiliations":["Cardiff University, Cardiff, United Kingdom"],"email":"verdezotodiasn@cardiff.ac.uk","is_corresponding":false,"name":"Nervo Verdezoto"},{"affiliations":["Cardiff University, Cardiff, United Kingdom"],"email":"lannon@cardiff.ac.uk","is_corresponding":false,"name":"Simon Lannon"},{"affiliations":["Cardiff University, Cardiff, United Kingdom"],"email":"gwilliamja@cardiff.ac.uk","is_corresponding":false,"name":"Jullie Gwilliam"},{"affiliations":["Cardiff University, Cardiff, United Kingdom"],"email":"eslambolchilarp@cardiff.ac.uk","is_corresponding":false,"name":"Parisa Eslambolchilar"}],"award":"","doi":"","event_id":"w-vis4climate","event_title":"Visualization for Climate Action and Sustainability","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-vis4climate-1018","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-vis4climate0","session_room":"None","session_title":"Vis4Climate","session_uid":"w-vis4climate","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Vis4Climate"],"time_stamp":"","title":"Eco-Garden: A Data Sculpture to Encourage Sustainable Practices in Everyday Life in Households","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-vis4climate-1023","abstract":"Consumers have the potential to play a large role in mitigating the climate crisis by taking on more pro-environmental behavior, for example by making more sustainable food choices. However, while environmental awareness is common among consumers, it is not always clear what the current impact of one's own food choices is, and consequently it is not always clear how or why their own behavior must change, or how important the change is. Immersive technologies have been shown to aid in these aspects. In this paper, we bring food production into the home by means of handheld augmented reality. Using the current prototype, users can input which ingredients are in their meal on their smartphone, and after making a 3D scan of their kitchen, plants, livestock, feed, and water required for all are visualized in front of them. 
In this paper, we describe the design of the current prototype and, by analyzing the current state of research on virtual and augmented reality for sustainability research, we describe in which ways the application could be extended in terms of data, models, and interaction, to investigate the most prominent issues within environmental sustainability communications research.","authors":[{"affiliations":["Wageningen University and Research, Wageningen, Netherlands"],"email":"nina.rosa-dejong@wur.nl","is_corresponding":true,"name":"Nina Rosa"}],"award":"","doi":"","event_id":"w-vis4climate","event_title":"Visualization for Climate Action and Sustainability","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-vis4climate-1023","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-vis4climate0","session_room":"None","session_title":"Vis4Climate","session_uid":"w-vis4climate","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Vis4Climate"],"time_stamp":"","title":"AwARe: Using handheld augmented reality for researching the potential of food resource information visualization","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-vis4climate-1024","abstract":"This paper details the development and implementation of a collaborative exhibit at Boston\u2019s Museum of Science showcasing interactive data visualizations designed to educate the public on global sustainability and urban environmental concerns. Supported by cross-institutional collaboration, the exhibit provided a rich real-world learning opportunity for students, resulting in a set of public-facing educational resources that informed visitors of global sustainability concerns through the lens of a local municipality. The realization of this project was made possible only by a close collaboration between a municipality, a science museum, and academic partners, all of whom committed their expertise and resources at both leadership and implementation team levels. This initiative highlights the value of cross-institutional collaboration to ignite the transformative potential of interactive visualizations in driving public engagement with local and global sustainability issues. 
Focusing on promoting sustainability and enhancing community well-being, this initiative highlights the potential of cross-institutional collaboration and locally-relevant interactive data visualizations to educate, inspire action, and foster community engagement in addressing climate change and urban sustainability.","authors":[{"affiliations":["Brown University, Providence, United States","Rhode Island School of Design, Providence, United States"],"email":"bae@brown.edu","is_corresponding":true,"name":"Beth Altringer Eagle"},{"affiliations":["Harvard University, Cambridge, United States"],"email":"sylvan@media.mit.edu","is_corresponding":false,"name":"Elisabeth Sylvan"}],"award":"","doi":"","event_id":"w-vis4climate","event_title":"Visualization for Climate Action and Sustainability","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-vis4climate-1024","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-vis4climate0","session_room":"None","session_title":"Vis4Climate","session_uid":"w-vis4climate","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Vis4Climate"],"time_stamp":"","title":"Cultivating Climate Action Through Multi-Institutional Collaboration: Innovative Data Visualization Educational Programs and Exhibits for Public Engagement","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-future-1007","abstract":"Data physicalizations are a time-tested practice for visualizing data, but the sustainability challenges of current physicalization practices have only recently been explored; for example, the usage of carbon-intensive, non-renewable materials like plastic and metal. This work explores clay physicalizations as an approach to these challenges. Using a three-stage process, we investigate the design and sustainability of clay 3D printed physicalizations: 1) exploring the properties and constraints of clay when extruded through a 3D printer, 2) testing a variety of data encodings that work within the constraints, and 3) introducing Rain Gauge, a clay physicalization exploring climate effects on climate data with an impermanent material. Throughout our process, we investigate the material circularity of clay-based digital fabrication by reclaiming and reusing the clay stock in each stage. Finally, we reflect on the implications of ceramic 3D printing for data physicalization through the lenses of practicality and sustainability.","authors":[{"affiliations":["University of Minnesota, Minneapolis, United States"],"email":"bridger.g.herman@gmail.com","is_corresponding":true,"name":"Bridger Herman"},{"affiliations":["University of Minnesota, Minneapolis, United States"],"email":"jlrossi@umn.edu","is_corresponding":false,"name":"Jessica Rossi-Mastracci"},{"affiliations":["University of Minnesota, Minneapolis, United States"],"email":"will1070@umn.edu","is_corresponding":false,"name":"Heather Willy"},{"affiliations":["University of Minnesota, Minneapolis, United States"],"email":"mreicher@umn.edu","is_corresponding":false,"name":"Molly Reichert"},{"affiliations":["University of Minnesota, Minneapolis, United States"],"email":"dfk@umn.edu","is_corresponding":false,"name":"Daniel F. 
Keefe"}],"award":"","doi":"","event_id":"w-future","event_title":"VISions of the Future: Workshop on Sustainable Practices within Visualization and Physicalisation","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-future-1007","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-future0","session_room":"None","session_title":"VISions of the Future","session_uid":"w-future","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["VISions of the Future"],"time_stamp":"","title":"Rain Gauge: Exploring the Design and Sustainability of 3D Printed Clay Physicalizations","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-future-1008","abstract":"We explain our model of data-in-a-void and contrast it with the idea of data-voids to explore how the different framings impact our thinking on sustainability. This contrast supports our assertion that how we think about the data that we work with for visualization design impacts the direction of our thinking and our work. To show this we describe how we view the concept of data-in-a-void as different from that of data-voids. Then we provide two examples, one that relates to existing data about bicycle mobility, and one about non-data for local food production. In the discussion, we then untangle and outline how our thinking about data for sustainability is impacted and influenced by the data-in-a-void model.","authors":[{"affiliations":["University of Calgary, Calgary, Canada"],"email":"karly.ross@ucalgary.ca","is_corresponding":true,"name":"Karly Ross"},{"affiliations":["University of Calgary, Calgary, Canada"],"email":"pratim.sengupta@ucalgary.ca","is_corresponding":false,"name":"Pratim Sengupta"},{"affiliations":["University of Calgary, Calgary, Canada"],"email":"wj@wjwillett.net","is_corresponding":false,"name":"Wesley Willett"}],"award":"","doi":"","event_id":"w-future","event_title":"VISions of the Future: Workshop on Sustainable Practices within Visualization and Physicalisation","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-future-1008","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-future0","session_room":"None","session_title":"VISions of the Future","session_uid":"w-future","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["VISions of the Future"],"time_stamp":"","title":"(Almost) All Data is Absent Data","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-future-1011","abstract":"This study explores energy issues across various nations, focusing on sustainable energy availability and accessibility. Representatives from all continents were selected based on their HDI values. Data from Kaggle, spanning 2000-2020, was analyzed using Python to address questions on electricity access, renewable energy generation, and fossil fuel consumption. 
The research employed statistical and data visualization techniques to reveal trends and disparities. Findings underscore the importance of Python and Kaggle in data analysis. The study suggests expanding datasets and incorporating predictive modeling for future research to enhance understanding and decision-making in energy policies.","authors":[{"affiliations":["Faculdade Nova Roma, Recife, Brazil"],"email":"gustavodssilva456@gmail.com","is_corresponding":true,"name":"Gustavo Santos Silva"},{"affiliations":["Faculdade Nova Roma, Recife, Brazil"],"email":"lartur671@gmail.com","is_corresponding":false,"name":"Artur Vin\u00edcius Lima Silva"},{"affiliations":["Faculdade Nova Roma, Recife, Brazil"],"email":"lpsouza612@gmail.com","is_corresponding":false,"name":"Lucas Pereira Souza"},{"affiliations":["Faculdade Nova Roma, Recife, Brazil"],"email":"adrianlauzid@gmail.com","is_corresponding":false,"name":"Adrian Lauzid"},{"affiliations":["Universidade Federal de Pernambuco, Recife, Brazil"],"email":"djmm@cin.ufpe.br","is_corresponding":false,"name":"Davi Maia"}],"award":"","doi":"","event_id":"w-future","event_title":"VISions of the Future: Workshop on Sustainable Practices within Visualization and Physicalisation","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-future-1011","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-future0","session_room":"None","session_title":"VISions of the Future","session_uid":"w-future","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["VISions of the Future"],"time_stamp":"","title":"Renewable Energy Data Visualization: A study with Open Data","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-future-1012","abstract":"Information visualization holds significant potential to support sustainability goals such as environmental stewardship and climate resilience by transforming complex data into accessible visual formats that enhance public understanding of complex climate change data and drive actionable insights. While the field has predominantly focused on the analytical orientation of visualization, ``critical visualization'' research challenges traditional visualization techniques and goals, expanding existing assumptions and conventions in the field. In this paper, I explore how reimagining overlooked aspects of data visualization\u2014such as engagement, emotional resonance, communication, and community empowerment\u2014can contribute to achieving sustainability objectives. I argue that by focusing on inclusive data visualization that promotes clarity, understandability, and public participation, we can make complex data more relatable and actionable, fostering broader connections and mobilizing collective action on critical issues like climate change. Moreover, I discuss the role of emotional receptivity in environmental data communication, stressing the need for visualizations that respect diverse cultural perspectives and emotional responses to achieve impactful outcomes. 
Drawing on insights from a decade of research in public participation and community engagement, I aim to highlight how data visualization can democratize data access and increase public involvement in order to contribute to a more sustainable and resilient future.","authors":[{"affiliations":["University of Massachusetts Amherst, Amherst, United States"],"email":"nmahyar@cs.umass.edu","is_corresponding":true,"name":"Narges Mahyar"}],"award":"","doi":"","event_id":"w-future","event_title":"VISions of the Future: Workshop on Sustainable Practices within Visualization and Physicalisation","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-future-1012","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-future0","session_room":"None","session_title":"VISions of the Future","session_uid":"w-future","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["VISions of the Future"],"time_stamp":"","title":"Reimagining Data Visualization to Address Sustainability Goals","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-future-1013","abstract":"This position paper discusses the role of data visualizations in journalism based on new areas of study such as visual journalism and data journalism, using examples from the coverage of the catastrophe that occurred in 2024 in Rio Grande do Sul, Brazil, affecting over 2 million people. This case served as a warning to the country about the importance of the climate change agenda and its consequences. 
The paper includes a literature review in the fields of journalism, data visualization, and psychology to explore the importance of data visualization in combating misinformation and in producing more reliable journalism as a tool for fighting climate change.","authors":[{"affiliations":["Universidade Federal de Pernambuco, Recife, Brazil"],"email":"emilly.brito@ufpe.br","is_corresponding":true,"name":"Emilly Brito"},{"affiliations":["Universidade Federal de Pernambuco, Recife, Brazil"],"email":"nivan@cin.ufpe.br","is_corresponding":false,"name":"Nivan Ferreira"}],"award":"","doi":"","event_id":"w-future","event_title":"VISions of the Future: Workshop on Sustainable Practices within Visualization and Physicalisation","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-future-1013","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-future0","session_room":"None","session_title":"VISions of the Future","session_uid":"w-future","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["VISions of the Future"],"time_stamp":"","title":"Visual and Data Journalism as Tools for Fighting Climate Change","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-storygenai-5237","abstract":"Communicating data insights in an accessible and engaging manner to a broader audience remains a significant challenge. To address this problem, we introduce the Emoji Encoder, a tool that generates a set of emoji recommendations for the field and category names appearing in a tabular dataset. The selected set of emoji encodings can be used to generate configurable unit charts that combine plain text and emojis as word-scale graphics. These charts can serve to contrast values across multiple quantitative fields for each row in the data or to communicate trends over time. Any resulting chart is simply a block of text characters, meaning that it can be directly copied into a text message or posted on a communication platform such as Slack or Teams. This work represents a step toward our larger goal of developing novel, fun, and succinct data storytelling experiences that engage those who do not identify as data analysts. 
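A minimal sketch of the word-scale unit-chart idea described above (not the Emoji Encoder's actual code); the field-to-emoji mapping is a hypothetical example, and the output is plain text that can be pasted into Slack or Teams:

    # Hypothetical field-to-emoji encoding for a tabular dataset.
    ENCODING = {"coffee_sales": "\u2615", "tea_sales": "\U0001F375"}

    def unit_chart(row, unit=10):
        """Render one data row as emoji units, one glyph per `unit`."""
        parts = []
        for field, emoji in ENCODING.items():
            value = row.get(field, 0)
            parts.append(f"{field}: {emoji * round(value / unit)} ({value})")
        return " | ".join(parts)

    print(unit_chart({"coffee_sales": 42, "tea_sales": 18}))
    # coffee_sales: ☕☕☕☕ (42) | tea_sales: 🍵🍵 (18)
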
Emoji-based unit charts can offer contextual cues related to the data at the center of a conversation on platforms where emoji-rich communication is typical.","authors":[{"affiliations":["University of Waterloo, Waterloo, Canada","Tableau Research, Seattle, United States"],"email":"mbrehmer@uwaterloo.ca","is_corresponding":true,"name":"Matthew Brehmer"},{"affiliations":["Tableau Research, Palo Alto, United States"],"email":"vsetlur@tableau.com","is_corresponding":false,"name":"Vidya Setlur"},{"affiliations":["McGraw Hill, Seattle, United States","Tableau Software, Seattle, United States"],"email":"zoezoezoe.cc@gmail.com","is_corresponding":false,"name":"Zoe Zoe"},{"affiliations":["Northeastern University, Portland, United States"],"email":"m.correll@northeastern.edu","is_corresponding":false,"name":"Michael Correll"}],"award":"","doi":"","event_id":"w-storygenai","event_title":"Workshop on Data Storytelling in an Era of Generative AI","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-storygenai-5237","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-storygenai0","session_room":"None","session_title":"Data Story GenAI","session_uid":"w-storygenai","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Data Story GenAI"],"time_stamp":"","title":"The Data-Wink Ratio: Emoji Encoder for Generating Semantically-Resonant Unit Charts","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-storygenai-6168","abstract":"Data-driven storytelling serves as a crucial bridge for communicating ideas in a persuasive way. However, the manual creation of data stories is a multifaceted, labor-intensive, and case-specific effort, limiting their broader application. As a result, automating the creation of data stories has emerged as a significant research thrust. Despite advances in Artificial Intelligence, the systematic generation of data stories remains challenging due to their hybrid nature: they must frame a perspective based on a seed idea in a top-down manner, similar to traditional storytelling, while coherently grounding insights of given evidence in a bottom-up fashion, akin to data analysis. These dual requirements necessitate precise constraints on the permissible space of a data story. In this viewpoint, we propose integrating constraints into the data story generation process. Defined upon the hierarchies of interpretation and articulation, constraints shape both narrations and illustrations to align with seed ideas and contextualized evidence. We identify the taxonomy and required functionalities of these constraints. Although constraints can be heterogeneous and latent, we explore the potential to represent them in a computation-friendly fashion via Domain-Specific Languages. 
We believe that leveraging constraints will balance the artistic and engineering aspects of data story generation.","authors":[{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"yu.zhe.s.shi@gmail.com","is_corresponding":true,"name":"Yu-Zhe Shi"},{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"haotian.li@connect.ust.hk","is_corresponding":false,"name":"Haotian Li"},{"affiliations":["Peking University, Beijing, China"],"email":"ruanlecheng@whai.pku.edu.cn","is_corresponding":false,"name":"Lecheng Ruan"},{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"huamin@cse.ust.hk","is_corresponding":false,"name":"Huamin Qu"}],"award":"","doi":"","event_id":"w-storygenai","event_title":"Workshop on Data Storytelling in an Era of Generative AI","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-storygenai-6168","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-storygenai0","session_room":"None","session_title":"Data Story GenAI","session_uid":"w-storygenai","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Data Story GenAI"],"time_stamp":"","title":"Constraint representation towards precise data-driven storytelling","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-storygenai-7043","abstract":"Creating data stories from raw data is challenging due to humans\u2019 limited attention spans and the need for specialized skills. Recent advancements in large language models (LLMs) offer great opportunities to develop systems with autonomous agents to streamline the data storytelling workflow. Though multi-agent systems have benefits such as fully realizing LLM potentials with decomposed tasks for individual agents, designing such systems also faces challenges in task decomposition, performance optimization for sub-tasks, and workflow design. To better understand these issues, we develop Data Director, an LLM-based multi-agent system designed to automate the creation of animated data videos, a representative genre of data stories. Data Director interprets raw data, breaks down tasks, designs agent roles to make informed decisions automatically, and seamlessly integrates diverse components of data videos. A case study demonstrates Data Director\u2019s effectiveness in generating data videos. Throughout development, we have derived lessons learned from addressing challenges, guiding further advancements in autonomous agents for data storytelling. 
We also shed light on future directions for global optimization, human-in-the-loop design, and the application of advanced multi-modal LLMs.","authors":[{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"lshenaj@connect.ust.hk","is_corresponding":true,"name":"Leixian Shen"},{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"haotian.li@connect.ust.hk","is_corresponding":false,"name":"Haotian Li"},{"affiliations":["Microsoft, Beijing, China"],"email":"yunvvang@gmail.com","is_corresponding":false,"name":"Yun Wang"},{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"huamin@cse.ust.hk","is_corresponding":false,"name":"Huamin Qu"}],"award":"","doi":"","event_id":"w-storygenai","event_title":"Workshop on Data Storytelling in an Era of Generative AI","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-storygenai-7043","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-storygenai0","session_room":"None","session_title":"Data Story GenAI","session_uid":"w-storygenai","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Data Story GenAI"],"time_stamp":"","title":"From Data to Story: Towards Automatic Animated Data Video Creation with LLM-based Multi-Agent Systems","youtube_ff_id":null,"youtube_ff_url":null},{"UID":"w-storygenai-7072","abstract":"Crafting accurate and insightful narratives from data visualization is essential in data storytelling. Like creative writing, where one reads to write a story, data professionals must effectively ``read\" visualizations to create compelling data stories. In education, helping students develop these skills can be achieved through exercises that ask them to create narratives from data plots, demonstrating both ``show\" (describing the plot) and ``tell\" (interpreting the plot). Providing formative feedback on these exercises is crucial but challenging in large-scale educational settings with limited resources. This study explores using GPT-4o, a multimodal LLM, to generate and evaluate narratives from data plots. The LLM was tested in zero-shot, one-shot, and two-shot scenarios, generating narratives and self-evaluating their depth. Human experts also assessed the LLM's outputs. Additionally, the study developed machine learning and LLM-based models to assess student-generated narratives using LLM-generated data. Human experts validated a subset of these machine assessments. 
The findings highlight the potential of LLMs to support scalable formative assessment in teaching data storytelling skills, which has important implications for AI-supported educational interventions.","authors":[{"affiliations":["University of Maryland Baltimore County, Baltimore, United States"],"email":"narens1@umbc.edu","is_corresponding":true,"name":"Naren Sivakumar"},{"affiliations":["University of Maryland, Baltimore County, Baltimore, United States"],"email":"lujiec@umbc.edu","is_corresponding":false,"name":"Lujie Karen Chen"},{"affiliations":["University of Maryland, Baltimore County, Baltimore, United States"],"email":"io11937@umbc.edu","is_corresponding":false,"name":"Pravalika Papasani"},{"affiliations":["University of Maryland Baltimore County, Hanover, United States"],"email":"vignam1@umbc.edu","is_corresponding":false,"name":"Vigna Majmundar"},{"affiliations":["Towson University, Towson, United States"],"email":"jfeng@towson.edu","is_corresponding":false,"name":"Jinjuan Heidi Feng"},{"affiliations":["SRI International, Menlo Park, United States"],"email":"louise.yarnall@sri.com","is_corresponding":false,"name":"Louise Yarnall"},{"affiliations":["University of Alabama, Tuscaloosa, United States"],"email":"jgong@umbc.edu","is_corresponding":false,"name":"Jiaqi Gong"}],"award":"","doi":"","event_id":"w-storygenai","event_title":"Workshop on Data Storytelling in an Era of Generative AI","external_paper_link":"","fno":"","has_fno":false,"has_image":false,"has_pdf":false,"id":"w-storygenai-7072","image_caption":"","keywords":[],"paper_type":"workshop","paper_type_color":"#f4a261","paper_type_name":"Workshop","prerecorded_video_id":null,"prerecorded_video_link":null,"session_bunny_ff_link":"","session_bunny_ff_subtitles":"","session_bunny_prerecorded_link":"","session_bunny_prerecorded_subtitles":"","session_id":"w-storygenai0","session_room":"None","session_title":"Data Story GenAI","session_uid":"w-storygenai","session_youtube_ff_id":"","session_youtube_ff_link":"","session_youtube_prerecorded_id":"","session_youtube_prerecorded_link":"","sessions":["Data Story GenAI"],"time_stamp":"","title":"Show and Tell: Exploring Large Language Model\u2019s Potential in Formative Educational Assessment of Data Stories","youtube_ff_id":null,"youtube_ff_url":null}]
diff --git a/program/serve_paper_list.json b/program/serve_paper_list.json
index ce59bd13c..1eda4bd63 100644
--- a/program/serve_paper_list.json
+++ b/program/serve_paper_list.json
@@ -1 +1 @@
-{"v-cga-10078374":{"abstract":"Existing dynamic weighted graph visualization approaches rely on users\u2019 mental comparison to perceive temporal evolution of dynamic weighted graphs, hindering users from effectively analyzing changes across multiple timeslices. We propose DiffSeer, a novel approach for dynamic weighted graph visualization by explicitly visualizing the differences of graph structures (e.g., edge weight differences) between adjacent timeslices. Specifically, we present a novel nested matrix design that overviews the graph structure differences over a time period as well as shows graph structure details in the timeslices of user interest. By collectively considering the overall temporal evolution and structure details in each timeslice, an optimization-based node reordering strategy is developed to group nodes with similar evolution patterns and highlight interesting graph structure details in each timeslice. 
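A minimal NumPy sketch of the difference-matrix idea at the core of this design (the nested matrix view and the node-reordering optimization are omitted); the random matrices stand in for edge weights:

    import numpy as np

    def edge_weight_diffs(weights):
        """Differences between adjacent timeslice adjacency matrices."""
        return [weights[t + 1] - weights[t] for t in range(len(weights) - 1)]

    rng = np.random.default_rng(0)
    slices = [rng.random((5, 5)) for _ in range(3)]  # toy weighted graphs
    for d in edge_weight_diffs(slices):
        # Positive entries = strengthened edges, negative = weakened.
        print(np.round(d, 2))
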
We conducted two case studies on real-world graph datasets and in-depth interviews with 12 target users to evaluate DiffSeer. The results demonstrate its effectiveness in visualizing dynamic weighted graphs.","accessible_pdf":false,"authors":[{"affiliations":"","email":"wenxiaolin@stu.scu.edu.cn","is_corresponding":false,"name":"Xiaolin Wen"},{"affiliations":"","email":"yongwang@smu.edu.sg","is_corresponding":true,"name":"Yong Wang"},{"affiliations":"","email":"wumeixuan@stu.scu.edu.cn","is_corresponding":false,"name":"Meixuan Wu"},{"affiliations":"","email":"wangfengjie@stu.scu.edu.cn","is_corresponding":false,"name":"Fengjie Wang"},{"affiliations":"","email":"xuanwu.yue@connect.ust.hk","is_corresponding":false,"name":"Xuanwu Yue"},{"affiliations":"","email":"shenqm@sustech.edu.cn","is_corresponding":false,"name":"Qiaomu Shen"},{"affiliations":"","email":"mayx@sustech.edu.cn","is_corresponding":false,"name":"Yuxin Ma"},{"affiliations":"","email":"zhumin@scu.edu.cn","is_corresponding":false,"name":"Min Zhu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yong Wang"],"doi":"10.1109/MCG.2023.3248289","external_paper_link":"","fno":"10078374","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Visibility Graph, Spatial Patterns, Weight Change, In-depth Interviews, Temporal Changes, Temporal Evolution, Negative Changes, Interesting Patterns, Edge Weights, Real-world Datasets, Graph Structure, Visual Approach, Dynamic Visualization, Dynamic Graph, Financial Networks, Graph Datasets, Similar Evolutionary Patterns, User Interviews, Similar Changes, Chinese New Year, Sector Indices, Original Graph, Red Rectangle, Nodes In Order, Stock Market Crash, Stacked Bar Charts, Different Types Of Matrices, Chinese New, Blue Rectangle"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"cga0","slot_id":"v-cga-10078374","time_end":"","time_stamp":"","time_start":"","title":"DiffSeer: Difference-Based Dynamic Weighted Graph Visualization","uid":"v-cga-10078374","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-cga-10091124":{"abstract":"The Internet of Food (IoF) is an emerging field in smart foodsheds, involving the creation of a knowledge graph (KG) about the environment, agriculture, food, diet, and health. However, the heterogeneity and size of the KG present challenges for downstream tasks, such as information retrieval and interactive exploration. To address those challenges, we propose an interactive knowledge and learning environment (IKLE) that integrates three programming and modeling languages to support multiple downstream tasks in the analysis pipeline. To make IKLE easier to use, we have developed algorithms to automate the generation of each language. In addition, we collaborated with domain experts to design and develop a dataflow visualization system, which embeds the automatic language generations into components and allows users to build their analysis pipeline by dragging and connecting components of interest. 
We have demonstrated the effectiveness of IKLE through three real-world case studies in smart foodsheds.","accessible_pdf":false,"authors":[{"affiliations":"","email":"tu.253@osu.edu","is_corresponding":true,"name":"Yamei Tu"},{"affiliations":"","email":"wang.5502@osu.edu","is_corresponding":false,"name":"Xiaoqi Wang"},{"affiliations":"","email":"qiu.580@osu.edu","is_corresponding":false,"name":"Rui Qiu"},{"affiliations":"","email":"hwshen@cse.ohio-state.edu","is_corresponding":false,"name":"Han-Wei Shen"},{"affiliations":"","email":"mmmille6@wisc.edu","is_corresponding":false,"name":"Michelle Miller"},{"affiliations":"","email":"jinmeng.rao@wisc.edu","is_corresponding":false,"name":"Jinmeng Rao"},{"affiliations":"","email":"song.gao@wisc.edu","is_corresponding":false,"name":"Song Gao"},{"affiliations":"","email":"prhuber@ucdavis.edu","is_corresponding":false,"name":"Patrick R. Huber"},{"affiliations":"","email":"adhollander@ucdavis.edu","is_corresponding":false,"name":"Allan D. Hollander"},{"affiliations":"","email":"matthew@ic-foods.org","is_corresponding":false,"name":"Matthew Lange"},{"affiliations":"","email":"cgarcia@tacc.utexas.edu","is_corresponding":false,"name":"Christian R. Garcia"},{"affiliations":"","email":"jstubbs@tacc.utexas.edu","is_corresponding":false,"name":"Joe Stubbs"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yamei Tu"],"doi":"10.1109/MCG.2023.3263960","external_paper_link":"","fno":"10091124","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Learning Environment, Interactive Learning Environments, Programming Language, Visual System, Analysis Pipeline, Patterns In Data, Flow Data, Human-computer Interaction, Food Systems, Information Retrieval, Domain Experts, Language Model, Automatic Generation, Interactive Exploration, Cyberinfrastructure, Pre-trained Language Models, Resource Description Framework, SPARQL Query, DBpedia, Entity Types, Data Visualization, Resilience Analysis, Load Data, Query Results, Supply Chain, Network Flow"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"cga0","slot_id":"v-cga-10091124","time_end":"","time_stamp":"","time_start":"","title":"An Interactive Knowledge and Learning Environment in Smart Foodsheds","uid":"v-cga-10091124","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-cga-10128890":{"abstract":"Some 15 years ago, Visualization Viewpoints published an influential article titled Rainbow Color Map (Still) Considered Harmful (Borland and Taylor, 2007). The paper argued that the \u201crainbow colormap\u2019s characteristics of confusing the viewer, obscuring the data and actively misleading interpretation make it a poor choice for visualization.\u201d Subsequent articles often repeat and extend these arguments, so much so that avoiding rainbow colormaps, along with their derivatives, has become dogma in the visualization community. Despite this loud and persistent recommendation, scientists continue to use rainbow colormaps. Have we failed to communicate our message, or do rainbow colormaps offer advantages that have not been fully appreciated? We argue that rainbow colormaps have properties that are underappreciated by existing design conventions. We explore key critiques of the rainbow in the context of recent research to understand where and how rainbows might be misunderstood. 
Choosing a colormap is a complex task, and rainbow colormaps can be useful for selected applications.","accessible_pdf":false,"authors":[{"affiliations":"","email":"cware@ccom.unh.edu","is_corresponding":false,"name":"Colin Ware"},{"affiliations":"","email":"mstone@acm.org","is_corresponding":true,"name":"Maureen Stone"},{"affiliations":"","email":"danielle.szafir@cs.unc.edu","is_corresponding":false,"name":"Danielle Albers Szafir"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Maureen Stone"],"doi":"10.1109/MCG.2023.3246111","external_paper_link":"","fno":"10128890","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Image Color Analysis, Semantics, Data Visualization, Estimation, Reliability Engineering"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"cga0","slot_id":"v-cga-10128890","time_end":"","time_stamp":"","time_start":"","title":"Rainbow Colormaps Are Not All Bad","uid":"v-cga-10128890","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-cga-10198358":{"abstract":"Set visualization facilitates the exploration and analysis of set-type data. However, how sets should be visualized when the data are uncertain is still an open research challenge. To address the problem of depicting uncertainty in set visualization, we ask 1) which aspects of set type data can be affected by uncertainty and 2) which characteristics of uncertainty influence the visualization design. We answer these research questions by first describing a conceptual framework that brings together 1) the information that is primarily relevant in sets (i.e., set membership, set attributes, and element attributes) and 2) different plausible categories of (un)certainty (i.e., certainty, undefined uncertainty as a binary fact, and defined uncertainty as quantifiable measure). Following the structure of our framework, we systematically discuss basic visualization examples of integrating uncertainty in set visualizations. 
We draw on existing knowledge about general uncertainty visualization and previous evidence of its effectiveness.","accessible_pdf":false,"authors":[{"affiliations":"","email":"christian.tominski@uni-rostock.de","is_corresponding":false,"name":"Christian Tominski"},{"affiliations":"","email":"m.behrisch@uu.nl","is_corresponding":true,"name":"Michael Behrisch"},{"affiliations":"","email":"susanne.bleisch@fhnw.ch","is_corresponding":false,"name":"Susanne Bleisch"},{"affiliations":"","email":"sara.fabrikant@geo.uzh.ch","is_corresponding":false,"name":"Sara Irina Fabrikant"},{"affiliations":"","email":"eva.mayr@donau-uni.ac.at","is_corresponding":false,"name":"Eva Mayr"},{"affiliations":"","email":"miksch@ifs.tuwien.ac.at","is_corresponding":false,"name":"Silvia Miksch"},{"affiliations":"","email":"helen.purchase@monash.edu","is_corresponding":false,"name":"Helen Purchase"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Michael Behrisch"],"doi":"10.1109/MCG.2023.3300441","external_paper_link":"","fno":"10198358","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Uncertainty, Data Visualization, Measurement Uncertainty, Visual Analytics, Terminology, Task Analysis, Surveys, Conceptual Framework, Cardinality, Data Visualization, Visual Representation, Measure Of The Amount, Set Membership, Intersection Set, Visual Design, Different Types Of Uncertainty, Missing Values, Visual Methods, Fuzzy Set, Age Of Students, Color Values, Uncertainty Values, Explicit Representation, Aggregate Value, Exact Information, Uncertain Information, Table Cells, Temporal Uncertainty, Uncertain Data, Representation Of Uncertainty, Implicit Representation, Spatial Uncertainty, Point Symbol, Visual Clutter, Color Hue, Graphical Elements, Uncertain Value"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"cga0","slot_id":"v-cga-10198358","time_end":"","time_stamp":"","time_start":"","title":"Visualizing Uncertainty in Sets","uid":"v-cga-10198358","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-cga-10201383":{"abstract":"Although visualizations are a useful tool for helping people to understand information, they can also have unintended effects on human cognition. This is especially true for uncertain information, which is difficult for people to understand. Prior work has found that different methods of visualizing uncertain information can produce different patterns of decision making from users. However, uncertainty can also be represented via text or numerical information, and few studies have systematically compared these types of representations to visualizations of uncertainty. We present two experiments that compared visual representations of risk (icon arrays) to numerical representations (natural frequencies) in a wildfire evacuation task. Like prior studies, we found that different types of visual cues led to different patterns of decision making. In addition, our comparison of visual and numerical representations of risk found that people were more likely to evacuate when they saw visualizations than when they saw numerical representations. 
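A minimal sketch contrasting the two risk formats compared in this study; the 10x10 layout, glyphs, and wording are illustrative choices, not the experimental stimuli:

    def natural_frequency(k, n=100):
        return f"{k} out of {n} homes in the area are expected to be affected."

    def icon_array(k, n=100, per_row=10):
        icons = ["X"] * k + ["."] * (n - k)  # X = affected, . = unaffected
        return "\n".join("".join(icons[i:i + per_row]) for i in range(0, n, per_row))

    print(natural_frequency(5))
    print(icon_array(5))
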
These experiments reinforce the idea that design choices are not neutral: seemingly minor differences in how information is represented can have important impacts on human risk perception and decision making.","accessible_pdf":false,"authors":[{"affiliations":"","email":"lematze@sandia.gov","is_corresponding":true,"name":"Laura E. Matzen"},{"affiliations":"","email":"bchowel@sandia.gov","is_corresponding":false,"name":"Breannan C. Howell"},{"affiliations":"","email":"mctrumb@sandia.gov","is_corresponding":false,"name":"Michael C. S. Trumbo"},{"affiliations":"","email":"kmdivis@sandia.gov","is_corresponding":false,"name":"Kristin M. Divis"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Laura E. Matzen"],"doi":"10.1109/MCG.2023.3299875","external_paper_link":"","fno":"10201383","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Visualization, Uncertainty, Decision Making, Costs, Task Analysis, Laboratories, Information Analysis, Decision Making, Visual Representation, Numerical Representation, Decision Patterns, Deterministic, Risk Perception, Specific Information, Fundamental Frequency, Point Values, Representation Of Information, Risk Information, Visual Conditions, Numerous Conditions, Human Decision, Numerical Information, Impact Of Different Types, Uncertain Information, Type Of Visualization, Differences In Risk Perception, Representation Of Uncertainty, Increase In Participation, Participants In Experiment, Individual Difference Measures, Sandia National Laboratories, Risk Propensity, Bonus Payments, Average Response Time, Difference In Probability, Response Time"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"cga0","slot_id":"v-cga-10201383","time_end":"","time_stamp":"","time_start":"","title":"Numerical and Visual Representations of Uncertainty Lead to Different Patterns of Decision Making","uid":"v-cga-10201383","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-cga-10207831":{"abstract":"A membership function categorizes quantities along with a confidence degree. This article investigates a generic user interaction based on this function for categorizing various types of quantities without modification, empowering users to articulate uncertainty categorization and significantly enhance their visual data analysis. We present the technique design and an online prototype, supplemented with insights from three case studies that highlight the technique\u2019s efficacy among different types of quantities. Furthermore, we conduct a formal user study to scrutinize the process and reasoning users employ while utilizing our technique. The findings indicate that our technique can help users create customized categories. 
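A minimal sketch of categorizing a quantity with a confidence degree via a trapezoidal membership function, in the spirit of the technique described above; the temperature categories and their bounds are hypothetical:

    def trapezoid(x, a, b, c, d):
        """Membership degree in [0, 1] for a trapezoidal fuzzy set."""
        if x <= a or x >= d:
            return 0.0
        if b <= x <= c:
            return 1.0
        return (x - a) / (b - a) if x < b else (d - x) / (d - c)

    # Hypothetical temperature categories, (a, b, c, d) per trapezoid.
    CATEGORIES = {"cold": (-40, -40, 5, 12), "mild": (5, 12, 20, 26), "hot": (20, 26, 50, 50)}

    def categorize(x):
        degrees = {name: trapezoid(x, *p) for name, p in CATEGORIES.items()}
        best = max(degrees, key=degrees.get)
        return best, degrees[best]

    print(categorize(22.0))  # ('mild', 0.666...): a category plus its confidence degree
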
Both our code and the interactive prototype are made available as open-source resources, intended for application across varied domains as a generic tool.","accessible_pdf":false,"authors":[{"affiliations":"","email":"liuliqun.cs@gmail.com","is_corresponding":true,"name":"Liqun Liu"},{"affiliations":"","email":"romain.vuillemot@ec-lyon.fr","is_corresponding":false,"name":"Romain Vuillemot"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Liqun Liu"],"doi":"10.1109/MCG.2023.3301449","external_paper_link":"","fno":"10207831","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Data Visualization, Uncertainty, Prototypes, Fuzzy Logic, Image Color Analysis, Fuzzy Sets, Open Source Software, General Function, Membership Function, User Study, Classification Process, Fuzzy Logic, Quantitative Values, Visualization Techniques, Amount Of Type, Fuzzy Theory, General Interaction, Temperature Dataset, Interaction Techniques, Carbon Dioxide, Computation Time, Rule Based, Web Page, Real World Scenarios, Fuzzy Set, Domain Experts, Supercritical CO 2, Parallel Coordinates, Fuzzy System, Fuzzy Clustering, Interactive Visualization, Amount Of Items, Large Scale Problems"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"cga0","slot_id":"v-cga-10207831","time_end":"","time_stamp":"","time_start":"","title":"A Generic Interactive Membership Function for Categorization of Quantities","uid":"v-cga-10207831","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-cga-10227838":{"abstract":"We report a study investigating the viability of using interactive visualizations to aid architectural design with building codes. While visualizations have been used to support general architectural design exploration, existing computational solutions treat building codes as separate from, rather than part of, the design process, creating challenges for architects. Through a series of participatory design studies with professional architects, we found that interactive visualizations have promising potential to aid design exploration and sensemaking in early stages of architectural design by providing feedback about potential allowances and consequences of design decisions. However, implementing a visualization system necessitates addressing the complexity and ambiguity inherent in building codes. 
To tackle these challenges, we propose various user-driven knowledge management mechanisms for integrating, negotiating, interpreting, and documenting building code rules.","accessible_pdf":false,"authors":[{"affiliations":"","email":"snowak@sfu.ca","is_corresponding":true,"name":"Stan Nowak"},{"affiliations":"","email":"bon.aseniero@autodesk.com","is_corresponding":false,"name":"Bon Adriel Aseniero"},{"affiliations":"","email":"lyn@sfu.ca","is_corresponding":false,"name":"Lyn Bartram"},{"affiliations":"","email":"tovi@dgp.toronto.edu","is_corresponding":false,"name":"Tovi Grossman"},{"affiliations":"","email":"George.fitzmaurice@autodesk.com","is_corresponding":false,"name":"George Fitzmaurice"},{"affiliations":"","email":"justin.matejka@autodesk.com","is_corresponding":false,"name":"Justin Matejka"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Stan Nowak"],"doi":"10.1109/MCG.2023.3307971","external_paper_link":"","fno":"10227838","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"cga0","slot_id":"v-cga-10227838","time_end":"","time_stamp":"","time_start":"","title":"Identifying Visualization Opportunities to Help Architects Manage the Complexity of Building Codes","uid":"v-cga-10227838","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-cga-10414267":{"abstract":"Traditional approaches to data visualization have often focused on comparing different subsets of data, and this is reflected in the many techniques developed and evaluated over the years for visual comparison. Similarly, common workflows for exploratory visualization are built upon the idea of users interactively applying various filter and grouping mechanisms in search of new insights. This paradigm has proven effective at helping users identify correlations between variables that can inform thinking and decision-making. However, recent studies show that consumers of visualizations often draw causal conclusions even when not supported by the data. Motivated by these observations, this article highlights recent advances from a growing community of researchers exploring methods that aim to directly support visual causal inference. However, many of these approaches have their own limitations, which limit their use in many real-world scenarios. 
This article, therefore, also outlines a set of key open challenges and corresponding priorities for new research to advance the state of the art in visual causal inference.","accessible_pdf":false,"authors":[{"affiliations":"","email":"borland@renci.org","is_corresponding":false,"name":"David Borland"},{"affiliations":"","email":"zeyuwang@cs.unc.edu","is_corresponding":false,"name":"Arran Zeyu Wang"},{"affiliations":"","email":"gotz@unc.edu","is_corresponding":false,"name":"David Gotz"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Arran Zeyu Wan"],"doi":"10.1109/MCG.2023.3338788","external_paper_link":"","fno":"10414267","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Analytical Models, Correlation, Visual Analytics, Decision Making, Data Visualization, Reliability Theory, Cognition, Inference Algorithms, Causal Inference, Causality, Social Media, Exploratory Analysis, Data Visualization, Visual Representation, Visual Analysis, Visualization Tool, Open Challenges, Interactive Visualization, Assembly Line, Different Subsets Of Data, Visual Analytics Tool, Data Driven Decision Making, Data Quality, Statistical Models, Causal Effect, Visual System, Use Of Social Media, Bar Charts, Causal Model, Causal Graph, Chart Types, Directed Acyclic Graph, Visual Design, Portion Of The Dataset, Causal Structure, Prior Section, Causal Explanations, Line Graph"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"cga0","slot_id":"v-cga-10414267","time_end":"","time_stamp":"","time_start":"","title":"Using Counterfactuals to Improve Causal Inferences From Visualizations","uid":"v-cga-10414267","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-cga-10478355":{"abstract":"Recent developments in artificial intelligence (AI) and machine learning (ML) have led to the creation of powerful generative AI methods and tools capable of producing text, code, images, and other media in response to user prompts. Significant interest in the technology has led to speculation about what fields, including visualization, can be augmented or replaced by such approaches. However, there remains a lack of understanding about which visualization activities may be particularly suitable for the application of generative AI. Drawing on examples from the field, we map current and emerging capabilities of generative AI across the different phases of the visualization lifecycle and describe salient opportunities and challenges.","accessible_pdf":false,"authors":[{"affiliations":"","email":"rahul.basole@accenture.com","is_corresponding":false,"name":"Rahul C. 
Basole"},{"affiliations":"","email":"timothy.major@accenture.com","is_corresponding":true,"name":"Timothy Major"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Timothy Major"],"doi":"10.1109/MCG.2024.3362168","external_paper_link":"","fno":"10478355","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Generative AI, Art, Artificial Intelligence, Machine Learning, Visualization, Media, Augmented Reality, Machine Learning, Visual Representation, Professional Knowledge, Creative Process, Domain Experts, Generalization Capability, Development Of Artificial Intelligence, Artificial Intelligence Capabilities, Iterative Process, Natural Language, Commercial Software, Hallucinations, Team Sports, Design Requirements, Intelligence Agencies, Recommender Systems, User Requirements, Iterative Design, Use Of Artificial Intelligence, Visual Design, Phase Assemblage, Data Literacy"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"cga0","slot_id":"v-cga-10478355","time_end":"","time_stamp":"","time_start":"","title":"Generative AI for Visualization: Opportunities and Challenges","uid":"v-cga-10478355","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-cga-9612019":{"abstract":"The number of online news articles available nowadays is rapidly increasing. When exploring articles on online news portals, navigation is mostly limited to the most recent ones. The spatial context and the history of topics are not immediately accessible. To support readers in the exploration or research of articles in large datasets, we developed an interactive 3D globe visualization. We worked with datasets from multiple online news portals containing up to 45,000 articles. Using agglomerative hierarchical clustering, we represent the referenced locations of news articles on a globe with different levels of detail. We employ two interaction schemes for navigating the viewpoint on the visualization, including support for hand-held devices and desktop PCs, and provide search functionality and interactive filtering. 
Based on this framework, we explore additional modules for jointly exploring the spatial and temporal domain of the dataset and incorporating live news into the visualization.","accessible_pdf":false,"authors":[{"affiliations":"","email":"nicholas.ingulfsen@gmail.com","is_corresponding":false,"name":"Nicholas Ingulfsen"},{"affiliations":"","email":"simone.schaub@visinf.tu-darmstadt.de","is_corresponding":false,"name":"Simone Schaub-Meyer"},{"affiliations":"","email":"grossm@inf.ethz.ch","is_corresponding":false,"name":"Markus Gross"},{"affiliations":"","email":"tobias.guenther@fau.de","is_corresponding":true,"name":"Tobias G\u00fcnther"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Tobias G\u00fcnther"],"doi":"10.1109/MCG.2021.3127434","external_paper_link":"","fno":"9612019","has_image":false,"has_pdf":false,"image_caption":"","keywords":["News Articles, Number Of Articles, Headlines, Interactive Visualization, Online News, Agglomerative Clustering, Local News, Interactive Exploration, Desktop PC, Different Levels Of Detail, News Portals, Spatial Information, User Study, 3D Space, Human-computer Interaction, Temporal Information, Third Dimension, Tablet Computer, Pie Chart, News Stories, 3D Visualization, Article Details, Visual Point, Bottom Of The Screen, Geospatial Data, Type Of Visualization, Largest Dataset, Tagging Location, Live Feed"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"cga0","slot_id":"v-cga-9612019","time_end":"","time_stamp":"","time_start":"","title":"News Globe: Visualization of Geolocalized News Articles","uid":"v-cga-9612019","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-cga-9745375":{"abstract":"We consider the general problem known as job shop scheduling, in which multiple jobs consist of sequential operations that need to be executed or served by appropriate machines having limited capacities. For example, train journeys (jobs) consist of moves and stops (operations) to be served by rail tracks and stations (machines). A schedule is an assignment of the job operations to machines and times where and when they will be executed. The developers of computational methods for job scheduling need tools enabling them to explore how their methods work. At a high level of generality, we define the system of pertinent exploration tasks and a combination of visualizations capable of supporting the tasks. We provide general descriptions of the purposes, contents, visual encoding, properties, and interactive facilities of the visualizations and illustrate them with images from an example implementation in air traffic management. We justify the design of the visualizations based on the tasks, principles of creating visualizations for pattern discovery, and scalability requirements. 
The outcomes of our research are sufficiently general to be of use in a variety of applications.","accessible_pdf":false,"authors":[{"affiliations":"","email":"gennady.andrienko@iais.fraunhofer.de","is_corresponding":true,"name":"Gennady Andrienko"},{"affiliations":"","email":"natalia.andrienko@iais.fraunhofer.de","is_corresponding":false,"name":"Natalia Andrienko"},{"affiliations":"","email":"jmcordero@e-crida.enaire.es","is_corresponding":false,"name":"Jose Manuel Cordero Garcia"},{"affiliations":"","email":"dirk.hecker@iais.fraunhofer.de","is_corresponding":false,"name":"Dirk Hecker"},{"affiliations":"","email":"georgev@unipi.gr","is_corresponding":false,"name":"George A. Vouros"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Gennady Andrienko"],"doi":"10.1109/MCG.2022.3163437","external_paper_link":"","fno":"9745375","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Visualization, Schedules, Task Analysis, Optimization, Job Shop Scheduling, Data Analysis, Processor Scheduling, Iterative Methods"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"cga0","slot_id":"v-cga-9745375","time_end":"","time_stamp":"","time_start":"","title":"Supporting Visual Exploration of Iterative Job Scheduling","uid":"v-cga-9745375","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-cga-9866547":{"abstract":"In many applications, developed deep-learning models need to be iteratively debugged and refined to improve model efficiency over time. Debugging some models, such as temporal multilabel classification (TMLC), where each data point can simultaneously belong to multiple classes, can be especially challenging due to the complexity of the analysis and the instances that need to be reviewed. In this article, focusing on video activity recognition as an application of TMLC, we propose DETOXER, an interactive visual debugging system to support finding different error types and scopes by providing multiscope explanations.","accessible_pdf":false,"authors":[{"affiliations":"","email":"mahsannourani@ufl.edu","is_corresponding":true,"name":"Mahsan Nourani"},{"affiliations":"","email":"chiradeep.roy@utdallas.edu","is_corresponding":false,"name":"Chiradeep Roy"},{"affiliations":"","email":"dhoneycutt@ufl.edu","is_corresponding":false,"name":"Donald R. Honeycutt"},{"affiliations":"","email":"eragan@ufl.edu","is_corresponding":false,"name":"Eric D. 
Ragan"},{"affiliations":"","email":"vibhav.gogate@utdallas.edu","is_corresponding":false,"name":"Vibhav Gogate"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Mahsan Nourani"],"doi":"10.1109/MCG.2022.3201465","external_paper_link":"","fno":"9866547","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Debugging, Analytical Models, Heating Systems, Data Models, Computational Modeling, Activity Recognition, Deep Learning, Multi Label Classification, Visualization Tool, Temporal Classification, Visual Debugging, False Positive, False Negative, Active Components, Deep Learning Models, Types Of Errors, Video Frames, Error Detection, Detection Of Types, Action Recognition, Interactive Visualization, Sequence Of Points, Design Goals, Positive Errors, Critical Outcomes, Error Patterns, Global Panel, False Negative Rate, False Positive Rate, Heatmap, Visual Approach, Truth Labels, True Positive, Confidence Score, Anomaly Detection, Interface Elements"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"cga0","slot_id":"v-cga-9866547","time_end":"","time_stamp":"","time_start":"","title":"DETOXER: A Visual Debugging Tool With Multiscope Explanations for Temporal Multilabel Classification","uid":"v-cga-9866547","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1026":{"abstract":"We present a visual analytics approach for multi-level visual exploration of users\u2019 interaction strategies in an interactive digital environment. The use of interactive touchscreen exhibits in informal learning environments, such as museums and science centers, often incorporate frameworks that classify learning processes, such as Bloom\u2019s taxonomy, to achieve better user engagement and knowledge transfer. To analyze user behavior within these digital environments, interaction logs are recorded to capture diverse exploration strategies. However, analysis of such logs is challenging, especially in terms of coupling interactions and cognitive learning processes, and existing work within learning and educational contexts remains limited. To address these gaps, we develop a visual analytics approach for analyzing interaction logs that supports exploration at the individual user level and multi-user comparison. The approach utilizes algorithmic methods to identify similarities in users' interactions and reveal their exploration strategies. We motivate and illustrate our approach through an application scenario, using event sequences derived from interaction log data in an experimental study conducted with science center visitors from diverse backgrounds and demographics. The study involves 14 users completing tasks of increasing complexity, designed to stimulate different levels of cognitive learning processes. We implement our approach in an interactive visual analytics prototype system, named VISID, and together with domain experts, discover a set of task-solving exploration strategies, such as \"cascading\" and \"nested-loop\", which reflect different levels of learning processes from Bloom's taxonomy. 
Finally, we discuss the generalizability and scalability of the presented system and the need for further research with data acquired in the wild.","accessible_pdf":false,"authors":[{"affiliations":["Media and Information Technology, Norrk\u00f6ping, Sweden"],"email":"peilin.yu@liu.se","is_corresponding":true,"name":"Peilin Yu"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"aida.vitoria@liu.se","is_corresponding":false,"name":"Aida Nordman"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"marta.koc-januchta@liu.se","is_corresponding":false,"name":"Marta M. Koc-Januchta"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"konrad.schonborn@liu.se","is_corresponding":false,"name":"Konrad J Sch\u00f6nborn"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"lonni.besancon@gmail.com","is_corresponding":false,"name":"Lonni Besan\u00e7on"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"katerina.vrotsou@liu.se","is_corresponding":false,"name":"Katerina Vrotsou"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Peilin Yu"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1026","time_end":"","time_stamp":"","time_start":"","title":"Revealing Interaction Dynamics: Multi-Level Visual Exploration of User Strategies with an Interactive Digital Environment","uid":"v-full-1026","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1031":{"abstract":"In soccer, player scouting aims to find players suitable for a team to increase the chance of winning future matches. To scout suitable players, coaches and analysts need to consider various complicated factors, such as the players' performance in the tactics of a new team, which is hard to learn directly from their historical performance. Match simulation methods have been introduced to scout players by estimating their expected contributions to a new team. However, they usually focus on the simulation of match results and hardly support interactive analysis to navigate potential target players and compare them in fine-grained simulated behaviors. In this work, we propose a visual analytics method to assist soccer player scouting based on match simulation. We construct a two-level match simulation framework for estimating both match results and player behaviors when a player comes to a new team. Based on the framework, we develop a visual analytics system, Team-Scouter, to facilitate the simulation-based soccer player scouting process through player navigation, comparison, and explanation. With our system, coaches and analysts can find potential players suitable for the team and compare them on historical and expected performances. To explain the players' expected performances, the system provides a visual comparison between the simulated behaviors of the player and the actual ones. 
The usefulness and effectiveness of the system are demonstrated by two case studies on a real-world dataset and an expert interview.","accessible_pdf":false,"authors":[{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"caoanqi28@163.com","is_corresponding":true,"name":"Anqi Cao"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"xxie@zju.edu.cn","is_corresponding":false,"name":"Xiao Xie"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"2366385033@qq.com","is_corresponding":false,"name":"Runjin Zhang"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"1282533692@qq.com","is_corresponding":false,"name":"Yuxin Tian"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"fanmu_032@zju.edu.cn","is_corresponding":false,"name":"Mu Fan"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"zhang_hui@zju.edu.cn","is_corresponding":false,"name":"Hui Zhang"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"ycwu@zju.edu.cn","is_corresponding":false,"name":"Yingcai Wu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Anqi Cao"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1031","time_end":"","time_stamp":"","time_start":"","title":"Team-Scouter: Simulative Visual Analytics of Soccer Player Scouting","uid":"v-full-1031","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1032":{"abstract":"Dynamic topic modeling is useful for discovering the development and change in latent topics over time. However, present methodology relies on algorithms that separate document and word representations. This prevents the creation of a meaningful embedding space where changes in word usage and documents can be directly analyzed in a temporal context. This paper proposes an expansion of the compass-aligned temporal Word2Vec methodology into dynamic topic modeling. Such a method allows for the direct comparison of word and document embeddings across time in dynamic topics. This enables the creation of visualizations that incorporate diachronic word embeddings within the context of documents into topic visualizations. In experiments against the current state-of-the-art, our proposed method demonstrates overall competitive performance in topic relevancy and diversity across temporal datasets of varying size. 
Simultaneously, it provides insightful visualizations focused on temporal word embeddings while maintaining the insights provided by global topic evolution, advancing our understanding of how topics evolve over time.","accessible_pdf":false,"authors":[{"affiliations":["Virginia Tech, Blacksburg, United States"],"email":"d4n1elp@vt.edu","is_corresponding":true,"name":"Daniel Palamarchuk"},{"affiliations":["Virginia Polytechnic Institute of Technology, Blacksburg, United States"],"email":"lemaraw@vt.edu","is_corresponding":false,"name":"Lemara Williams"},{"affiliations":["Virginia Tech, Blacksburg, United States"],"email":"bmayer@cs.vt.edu","is_corresponding":false,"name":"Brian Mayer"},{"affiliations":["Savannah River National Laboratory, Aiken, United States"],"email":"thomas.danielson@srnl.doe.gov","is_corresponding":false,"name":"Thomas Danielson"},{"affiliations":["Tulane University, New Orleans, United States"],"email":"rfaust1@tulane.edu","is_corresponding":false,"name":"Rebecca Faust"},{"affiliations":["Savannah River National Laboratory, Aiken, United States"],"email":"larry.deschaine@srnl.doe.gov","is_corresponding":false,"name":"Larry M Deschaine PhD"},{"affiliations":["Virginia Tech, Blacksburg, United States"],"email":"north@vt.edu","is_corresponding":false,"name":"Chris North"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Daniel Palamarchuk"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1032","time_end":"","time_stamp":"","time_start":"","title":"Visualizing Temporal Topic Embeddings with a Compass","uid":"v-full-1032","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1039":{"abstract":"Propagation analysis refers to studying how information spreads on social media, a pivotal endeavor for understanding social sentiment and public opinions. Numerous studies contribute to visualizing information spread, but few have considered the implicit and complex diffusion patterns among multiple platforms. To bridge the gap, we collaborated with professionals to discover crucial factors that dissect the mechanism of cross-platform information spread. Based on that, we propose an information diffusion model that estimates the likelihood of a topic/post spreading among different social media platforms. Moreover, we propose a novel visual metaphor that encapsulates cross-platform patterns in a manner analogous to the spread of seeds across gardens. Specifically, we visualize social platforms, posts, implicit cross-platform routes, and salient instances as elements of a virtual ecosystem \u2014 gardens, flowers, winds, and seeds, respectively. We further develop a visual analytic system, namely BloomWind, that enables users to quickly identify the cross-platform diffusion patterns and investigate the relevant social media posts. 
Ultimately, we demonstrate the usage of BloomWind through two case studies and validate its effectiveness using expert interviews.","accessible_pdf":false,"authors":[{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"940662579@qq.com","is_corresponding":true,"name":"Jianing Yin"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"hzjia@zju.edu.cn","is_corresponding":false,"name":"Hanze Jia"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"zhoubuwei@zju.edu.cn","is_corresponding":false,"name":"Buwei Zhou"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"tangtan@zju.edu.cn","is_corresponding":false,"name":"Tan Tang"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"yingluu@zju.edu.cn","is_corresponding":false,"name":"Lu Ying"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"sn_ye@zju.edu.cn","is_corresponding":false,"name":"Shuainan Ye"},{"affiliations":["Michigan State University, East Lansing, United States"],"email":"pengtaiq@msu.edu","is_corresponding":false,"name":"Tai-Quan Peng"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"ycwu@zju.edu.cn","is_corresponding":false,"name":"Yingcai Wu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Jianing Yin"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1039","time_end":"","time_stamp":"","time_start":"","title":"Blowing Seeds Across Gardens: Visualizing Implicit Propagation of Cross-Platform Social Media Posts","uid":"v-full-1039","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1059":{"abstract":"When treating Head and Neck cancer (HNC) patients, oncologists have to navigate a complicated series of treatment decisions for each patient. The relationship between each treatment decision and the potential tradeoff of tumor control and toxicity risk is poorly understood, leaving oncologists to largely rely on institutional knowledge and general guidelines that do not take into account specific patient circumstances. Evaluating these risks relies on a complicated understanding of several different factors, such as patient health, spatial tumor spread, and treatment side effect risk, that cannot be captured through simple heuristics. To support clinicians in better understanding tradeoffs when deciding on treatment courses, we developed DITTO, a digital-twin and visual computing system that allows clinicians to analyze nuanced risk for each patient and decide on an optimal treatment plan. DITTO relies on a sequential Deep Reinforcement Learning (DRL) system to deliver personalized risk estimates of both long-term and short-term disease outcomes and toxicity for HNC patients. Based on a participatory collaborative design alongside oncologists, we also implement several explainability methods to support clinical trust and encourage healthy skepticism when using our models. We evaluate the efficacy of our model through quantitative evaluation of model performance and case studies with qualitative feedback. 
Finally, we discuss design lessons for developing clinical visual XAI applications for clinical end users.","accessible_pdf":false,"authors":[{"affiliations":["University of Illinois at Chicago, Chicago, United States"],"email":"awentze2@uic.edu","is_corresponding":true,"name":"Andrew Wentzel"},{"affiliations":["University of Houston, Houston, United States"],"email":"skattia@mdanderson.org","is_corresponding":false,"name":"Serageldin Attia"},{"affiliations":["University of Illinois Chicago, Chicago, United States"],"email":"zhangz@uic.edu","is_corresponding":false,"name":"Xinhua Zhang"},{"affiliations":["University of Iowa, Iowa City, United States"],"email":"guadalupe-canahuate@uiowa.edu","is_corresponding":false,"name":"Guadalupe Canahuate"},{"affiliations":["University of Texas, Houston, United States"],"email":"cdfuller@mdanderson.org","is_corresponding":false,"name":"Clifton David Fuller"},{"affiliations":["University of Illinois at Chicago, Chicago, United States"],"email":"g.elisabeta.marai@gmail.com","is_corresponding":false,"name":"G. Elisabeta Marai"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Andrew Wentzel"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1059","time_end":"","time_stamp":"","time_start":"","title":"DITTO: A Visual Digital Twin for Interventions and Temporal Treatment Outcomes in Head and Neck Cancer","uid":"v-full-1059","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1060":{"abstract":"There is increased interest in understanding the interplay between text and visuals in the field of data visualization. However, this attention has predominantly been on the use of text in standalone visualizations (such as text annotation overlays) or augmenting text stories supported by a series of independent views. In this paper, we shift from the traditional focus on single-chart annotations to characterize the nuanced but crucial communication role of text in the complex environment of interactive dashboards. Through a survey and analysis of 190 dashboards in the wild, plus 13 expert interview sessions with experienced dashboard authors, we highlight the distinctive nature of text as an integral component of the dashboard experience, while delving into the categories, semantic levels, and functional roles of text, and exploring how these text elements are coalesced by dashboard authors to guide and inform dashboard users. Our contributions are threefold. First, we distill qualitative and quantitative findings from our studies to characterize current practices of text use in dashboards, including a categorization of text-based components and design patterns. Second, we leverage current practices and existing literature to propose, discuss, and validate recommended practices for text in dashboards, embodied as a set of 12 heuristics that underscore the semantic and functional role of text in offering navigational cues, contextualizing data insights, supporting reading order, among other concerns. 
Third, we reflect on our findings plus existing literature to identify gaps and propose opportunities for data visualization researchers to push the boundaries on text usage for dashboards, from authoring support and interactivity to text generation and content personalization. Our research underscores the significance of elevating text as a first-class citizen in data visualization, and the need to support the inclusion of textual components and their interactive affordances in dashboard design.","accessible_pdf":false,"authors":[{"affiliations":["Tableau Research, Seattle, United States"],"email":"nicole.sultanum@gmail.com","is_corresponding":true,"name":"Nicole Sultanum"},{"affiliations":["Tableau Research, Palo Alto, United States"],"email":"vsetlur@tableau.com","is_corresponding":false,"name":"Vidya Setlur"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Nicole Sultanum"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1060","time_end":"","time_stamp":"","time_start":"","title":"From Instruction to Insight: Exploring the Semantic and Functional Roles of Text in Interactive Dashboards","uid":"v-full-1060","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1063":{"abstract":"While previous work has found success in deploying visualizations as museum exhibits, it has not investigated whether museum context impacts visitor behaviour with these exhibits. We present an interactive Deep-time Literacy Visualization Exhibit (DeLVE) to help museum visitors understand deep time (lengths of extremely long geological processes) by improving proportional reasoning skills through comparison of different time periods. DeLVE uses a new visualization idiom, Connected Multi-Tier Ranges, to visualize curated datasets of past events across multiple scales of time, relating extreme scales with concrete scales that have more familiar magnitudes and units. Museum staff at three separate museums approved the deployment of DeLVE as a digital kiosk, and devoted time to curating a unique dataset in each of them. We collect data from two sources, an observational study and system trace logs. We discuss the importance of context: similar museum exhibits in different contexts were received very differently by visitors. We additionally discuss differences in our process from Sedlmair et al.'s design study methodology which is focused on design studies triggered by connection with collaborators rather than the discovery of a concept to communicate. Supplemental materials are available at: https://osf.io/z53dq/?view_only=4df33aad207144aca149982412125541","accessible_pdf":false,"authors":[{"affiliations":["The University of British Columbia, Vancouver, Canada"],"email":"marasolen@gmail.com","is_corresponding":true,"name":"Mara Solen"},{"affiliations":["University of British Columbia , Vancouver, Canada"],"email":"sultananigar70@gmail.com","is_corresponding":false,"name":"Nigar Sultana"},{"affiliations":["University of British Columbia, Vancouver, Canada"],"email":"laura.lukes@ubc.ca","is_corresponding":false,"name":"Laura A. 
Lukes"},{"affiliations":["University of British Columbia, Vancouver, Canada"],"email":"tmm@cs.ubc.ca","is_corresponding":false,"name":"Tamara Munzner"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Mara Solen"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1063","time_end":"","time_stamp":"","time_start":"","title":"DeLVE into Earth\u2019s Past: A Visualization-Based Exhibit Deployed Across Multiple Museum Contexts","uid":"v-full-1063","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1067":{"abstract":"Large Language Models (LLMs), such as ChatGPT and Llama, have revolutionized various domains through their impressive natural language processing capabilities. However, their deployment raises significant ethical and security concerns, including their potential misuse for generating fake news or aiding illegal activities. Thus, ensuring the development of secure and trustworthy LLMs is crucial. Traditional red teaming approaches for identifying vulnerabilities in AI models are limited by their reliance on manual prompt construction and expertise. This paper introduces a novel visual analytics system, AdversaFlow, designed to enhance the security of LLMs against adversarial attacks through human-AI collaboration. Our system, which involves adversarial training between a target model and a red model, is equipped with a unique multi-level adversarial flow visualization and a fluctuation path visualization technique. These features provide a detailed insight into the adversarial dynamics and the robustness of LLMs, thereby enabling AI security experts to identify and mitigate vulnerabilities effectively. We deliver quantitative evaluations for the models and present case studies that validate the utility of our system and share insights for future AI security solutions. 
Our contributions include a human-AI collaboration framework for LLM red teaming, a comprehensive visual analytics system to support adversarial pattern presentation and fluctuation analysis, and valuable lessons learned in visual analytics for AI security.","accessible_pdf":false,"authors":[{"affiliations":["Zhejiang University, Ningbo, China"],"email":"dengdazhen@outlook.com","is_corresponding":true,"name":"Dazhen Deng"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"zhangchuhan024@163.com","is_corresponding":false,"name":"Chuhan Zhang"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"huawzheng@gmail.com","is_corresponding":false,"name":"Huawei Zheng"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"yw.pu@zju.edu.cn","is_corresponding":false,"name":"Yuwen Pu"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"sji@zju.edu.cn","is_corresponding":false,"name":"Shouling Ji"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"ycwu@zju.edu.cn","is_corresponding":false,"name":"Yingcai Wu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Dazhen Deng"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1067","time_end":"","time_stamp":"","time_start":"","title":"AdversaFlow: Visual Red Teaming for Large Language Models with Multi-Level Adversarial Flow","uid":"v-full-1067","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1077":{"abstract":"A growing body of work draws on feminist thinking to challenge assumptions about how people engage with and use visualizations. This work draws on feminist values, driving design and research guidelines that account for the influences of power and neglect. This prior work is largely prescriptive, however, forgoing articulation of how feminist theories of knowledge \u2014 or feminist epistemology \u2014 can alter research design and outcomes. At the core of our work is an engagement with feminist epistemology, drawing attention to how a new framework for how we know what we know enabled us to overcome intellectual tensions in our research. Specifically, we focus on the theoretical concept of entanglement, central to recent feminist scholarship, and contribute: a history of entanglement in the broader scope of feminist theory; an articulation of the main points of entanglement theory for a visualization context; and a case study of research outcomes as evidence of the potential of feminist epistemology to impact visualization research. 
This work answers a call in the community to embrace a broader set of theoretical and epistemic foundations and provides a starting point for bringing different theories into visualization research.","accessible_pdf":false,"authors":[{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"derya.akbaba@liu.se","is_corresponding":true,"name":"Derya Akbaba"},{"affiliations":["Emory University, Atlanta, United States"],"email":"lauren.klein@emory.edu","is_corresponding":false,"name":"Lauren Klein"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"miriah.meyer@liu.se","is_corresponding":false,"name":"Miriah Meyer"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Derya Akbaba"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1077","time_end":"","time_stamp":"","time_start":"","title":"Entanglements for Visualization: Changing Research Outcomes through Feminist Theory","uid":"v-full-1077","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1096":{"abstract":"Large Language Models (LLMs) have shown great potential in intelligent visualization systems, especially for domain-specific applications. Integrating LLMs into visualization systems presents challenges, and we categorize these challenges into three alignments: domain problems with LLMs, visualization with LLMs, and interaction with LLMs. To achieve these alignments, we propose a framework and outline a workflow to guide the application of fine-tuned LLMs to enhance visual interactions for domain-specific tasks. These alignment challenges are critical in education as they call for an intelligent visualization system to support beginners' self-regulated learning. Therefore, we apply the framework to education and introduce Tailor-Mind, an interactive visualization system designed to facilitate self-regulated learning for artificial intelligence beginners. Drawing on insights from a preliminary study, we identify self-regulated learning tasks and fine-tuning objectives to guide visualization design and tuning data construction. Our focus on aligning visualization with the fine-tuned LLM makes Tailor-Mind more like a personalized tutor. Tailor-Mind also supports interactive recommendations to help beginners better achieve their learning goals. 
Model performance evaluations and user studies confirm that Tailor-Mind improves the self-regulated learning experience, effectively validating the proposed framework.","accessible_pdf":false,"authors":[{"affiliations":["Fudan University, Shanghai, China"],"email":"lgao.lynne@gmail.com","is_corresponding":true,"name":"Lin Gao"},{"affiliations":["Fudan University, Shanghai, China"],"email":"kingluther6666@gmail.com","is_corresponding":false,"name":"Jing Lu"},{"affiliations":["Fudan University, Shanghai, China"],"email":"gemini25szk@gmail.com","is_corresponding":false,"name":"Zekai Shao"},{"affiliations":["Fudan University, Shanghai, China"],"email":"ziyuelin917@gmail.com","is_corresponding":false,"name":"Ziyue Lin"},{"affiliations":["Fudan University, Shanghai, China"],"email":"sbyue23@m.fudan.edu.cn","is_corresponding":false,"name":"Shengbin Yue"},{"affiliations":["Fudan University, Shanghai, China"],"email":"chiokit0819@gmail.com","is_corresponding":false,"name":"Chiokit Ieong"},{"affiliations":["Fudan University, Shanghai, China"],"email":"21307130094@m.fudan.edu.cn","is_corresponding":false,"name":"Yi Sun"},{"affiliations":["University of Vienna, Vienna, Austria"],"email":"rory.james.zauner@univie.ac.at","is_corresponding":false,"name":"Rory Zauner"},{"affiliations":["Fudan University, Shanghai, China"],"email":"zywei@fudan.edu.cn","is_corresponding":false,"name":"Zhongyu Wei"},{"affiliations":["Fudan University, Shanghai, China"],"email":"simingchen3@gmail.com","is_corresponding":false,"name":"Siming Chen"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Lin Gao"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1096","time_end":"","time_stamp":"","time_start":"","title":"Fine-Tuned Large Language Model for Visualization System: A Study on Self-Regulated Learning in Education","uid":"v-full-1096","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1099":{"abstract":"Tactics play an important role in team sports by guiding how players interact on the field. Both sports fans and experts have a demand for analyzing sports tactics. Existing approaches allow users to visually perceive the multivariate tactical effects. However, these approaches usually consider each tactic as a whole, making it difficult for users to connect the complex interactions inside each tactic to the final tactical effect. In this work, we collaborate with basketball experts and propose a progressive approach to help users gain a deeper understanding of how each tactic works and customize tactics on demand. Users can progressively sketch on a tactic board, and a coach agent will simulate the possible actions in each step and present the simulation to users with facet visualizations. We develop an extensible framework that integrates large language models (LLMs) and visualizations to help users communicate with the coach agent with multimodal inputs. Based on the framework, we design and develop Smartboard, an agent-based interactive visualization system for fine-grained tactical analysis. Smartboard provides users with a structured process of setup, simulation, and evolution, allowing for iterative exploration of tactics based on specific personalized scenarios. 
We conduct case studies based on real-world basketball datasets to demonstrate the usefulness of our system.","accessible_pdf":false,"authors":[{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"ziao_liu@outlook.com","is_corresponding":true,"name":"Ziao Liu"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"xxie@zju.edu.cn","is_corresponding":false,"name":"Xiao Xie"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"3170101799@zju.edu.cn","is_corresponding":false,"name":"Moqi He"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"zhao_ws@zju.edu.cn","is_corresponding":false,"name":"Wenshuo Zhao"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"wuyihong0606@gmail.com","is_corresponding":false,"name":"Yihong Wu"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"lycheecheng@zju.edu.cn","is_corresponding":false,"name":"Liqi Cheng"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"zhang_hui@zju.edu.cn","is_corresponding":false,"name":"Hui Zhang"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"ycwu@zju.edu.cn","is_corresponding":false,"name":"Yingcai Wu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Ziao Liu"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1099","time_end":"","time_stamp":"","time_start":"","title":"Smartboard: Visual Exploration of Team Tactics with LLM Agent","uid":"v-full-1099","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1100":{"abstract":"\u201cCorrelation does not imply causation\u201d is a famous mantra in statistical and visual analysis. However, consumers of visualizations often draw causal conclusions when only correlations between variables are shown. In this paper, we investigate factors that contribute to causal relationships users perceive in visualizations. We collected a corpus of concept pairs from variables in widely used datasets and created visualizations that depict varying correlative associations using three typical statistical chart types. We conducted two MTurk studies on (1) preconceived notions on causal relations without charts, and (2) perceived causal relations with charts, for each concept pair. Our results indicate that people make assumptions about causal relationships between pairs of concepts even without seeing any visualized data. Moreover, our results suggest that these assumptions constitute causal priors that, in combination with chart type and visualized association, impact how data visualizations are interpreted. The results also suggest that causal priors may lead to over- or under-estimation in perceived causal relations in different circumstances, and that those priors can also impact users\u2019 confidence in their causal assessments. Using data from the studies, we develop a model to capture the interaction between causal priors and visualized associations as they combine to impact a user\u2019s perceived causal relations. In addition to reporting the study results and analyses, we provide an open dataset of causal priors for 56 specific concept pairs that can serve as a potential benchmark for future studies. 
We also suggest heuristic-based guidelines to help designers improve visualization design choices to better support visual causal inference.","accessible_pdf":false,"authors":[{"affiliations":["University of North Carolina-Chapel Hill, Chapel Hill, United States"],"email":"zeyuwang@cs.unc.edu","is_corresponding":true,"name":"Arran Zeyu Wang"},{"affiliations":["UNC-Chapel Hill, Chapel Hill, United States"],"email":"borland@renci.org","is_corresponding":false,"name":"David Borland"},{"affiliations":["Davidson College, Davidson, United States"],"email":"tapeck@davidson.edu","is_corresponding":false,"name":"Tabitha C. Peck"},{"affiliations":["University of North Carolina, Chapel Hill, United States"],"email":"vaapad@live.unc.edu","is_corresponding":false,"name":"Wenyuan Wang"},{"affiliations":["University of North Carolina, Chapel Hill, United States"],"email":"gotz@unc.edu","is_corresponding":false,"name":"David Gotz"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Arran Zeyu Wang"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1100","time_end":"","time_stamp":"","time_start":"","title":"Causal Priors and Their Influence on Judgements of Causality in Visualized Data","uid":"v-full-1100","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1121":{"abstract":"Acute stroke demands prompt diagnosis and treatment to achieve optimal patient outcomes. However, the intricate and irregular nature of clinical data associated with acute stroke, particularly blood pressure (BP) measurements, presents substantial obstacles to effective visual analytics and decision-making. Through a year-long collaboration with experienced neurologists, we developed PhenoFlow, a visual analytics system that leverages the collaboration between humans and Large Language Models (LLMs) to analyze the extensive and complex data of acute ischemic stroke patients. PhenoFlow pioneers an innovative workflow, where the LLM serves as a data wrangler while neurologists explore and supervise the output using visualizations and natural language interactions. This approach enables neurologists to focus more on decision-making with reduced cognitive load. To protect sensitive patient information, PhenoFlow only utilizes metadata to make inferences and synthesize executable code, without accessing raw patient data. This ensures that the results are both reproducible and interpretable while maintaining patient privacy. The system incorporates a slice-and-wrap design that employs temporal folding to create an overlaid circular visualization. Combined with a linear bar graph, this design aids in exploring meaningful patterns within irregularly measured BP data. Through case studies, PhenoFlow has demonstrated its capability to support iterative analysis of extensive clinical datasets, reducing cognitive load and enabling neurologists to make well-informed decisions. 
Grounded in long-term collaboration with domain experts, our research demonstrates the potential of utilizing LLMs to tackle current challenges in data-driven clinical decision-making for acute ischemic stroke patients.","accessible_pdf":false,"authors":[{"affiliations":["Seoul National University, Seoul, Korea, Republic of"],"email":"jykim@hcil.snu.ac.kr","is_corresponding":true,"name":"Jaeyoung Kim"},{"affiliations":["Seoul National University, Seoul, Korea, Republic of"],"email":"sihyeon@hcil.snu.ac.kr","is_corresponding":false,"name":"Sihyeon Lee"},{"affiliations":["Seoul National University, Seoul, Korea, Republic of"],"email":"hj@hcil.snu.ac.kr","is_corresponding":false,"name":"Hyeon Jeon"},{"affiliations":["Korea University Guro Hospital, Seoul, Korea, Republic of"],"email":"gooday19@gmail.com","is_corresponding":false,"name":"Keon-Joo Lee"},{"affiliations":["Hankuk University of Foreign Studies, Yongin-si, Korea, Republic of"],"email":"bkim@hufs.ac.kr","is_corresponding":false,"name":"Bohyoung Kim"},{"affiliations":["Seoul National University Bundang Hospital, Seongnam, Korea, Republic of"],"email":"braindoc@snu.ac.kr","is_corresponding":false,"name":"HEE JOON"},{"affiliations":["Seoul National University, Seoul, Korea, Republic of"],"email":"jseo@snu.ac.kr","is_corresponding":false,"name":"Jinwook Seo"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Jaeyoung Kim"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1121","time_end":"","time_stamp":"","time_start":"","title":"PhenoFlow: A Human-LLM Driven Visual Analytics System for Exploring Large and Complex Stroke Datasets","uid":"v-full-1121","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1128":{"abstract":"Citations allow quickly identifying related research. If multiple publications are selected as seeds, specific suggestions for related literature can be made based on the number of incoming and outgoing citation links to this selection. Interactively adding recommended publications to the selection refines the next suggestion and incrementally builds a relevant collection of publications. Following this approach, the paper presents a search and foraging approach, PUREsuggest, which combines citation-based suggestions with augmented visualizations of the citation network. The focus and novelty of the approach is, first, the transparency of how the rankings are explained visually and, second, that the process can be steered through user-defined keywords, which reflect topics of interests. The system can be used to build new literature collections, to update and assess existing ones, as well as to use the collected literature for identifying relevant experts in the field. 
We evaluated the recommendation approach through simulated sessions and performed a user study investigating search strategies and usage patterns supported by the interface.","accessible_pdf":false,"authors":[{"affiliations":["University of Bamberg, Bamberg, Germany"],"email":"fabian.beck@uni-bamberg.de","is_corresponding":true,"name":"Fabian Beck"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Fabian Beck"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1128","time_end":"","time_stamp":"","time_start":"","title":"PUREsuggest: Citation-based Literature Search and Visual Exploration with Keyword-controlled Rankings","uid":"v-full-1128","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1137":{"abstract":"Inspired by recent advances in digital fabrication, artists and scientists have demonstrated that physical data encodings (i.e., data physicalizations) can increase engagement with data, foster collaboration, and in some cases, improve data legibility and analysis relative to digital alternatives. However, prior empirical studies have only investigated abstract data encoded in physical form (e.g., laser cut bar charts) and not continuously sampled spatial data fields relevant to climate and medical science (e.g., heights, temperatures, densities, and velocities sampled on a spatial grid). This paper presents the design and results of the first study to characterize human performance in 3D spatial data analysis tasks across analogous physical and digital visualizations. Participants analyzed continuous spatial elevation data with three visualization modalities: (1) 2D digital visualization; (2) perspective-tracked, stereoscopic \"fishtank\" virtual reality; and (3) 3D printed data physicalization. Their tasks included tracing paths downhill, looking up spatial locations and comparing their relative heights, and identifying and reporting the minimum and maximum heights within certain spatial regions. As hypothesized, in most cases, participants performed the tasks just as well or better in the physical modality (based on time and error metrics). Additional results include an analysis of open-ended feedback from participants and discussion of implications for further research on the value of data physicalization. All data and supplemental materials are available at https://osf.io/7xdq4/?view_only=7416f8cfca85473889456fb69527abbc","accessible_pdf":false,"authors":[{"affiliations":["University of Minnesota, Minneapolis, United States"],"email":"bridger.g.herman@gmail.com","is_corresponding":true,"name":"Bridger Herman"},{"affiliations":["Beth Israel Deaconess Medical Center, Boston, United States"],"email":"cdjackso@bidmc.harvard.edu","is_corresponding":false,"name":"Cullen D. Jackson"},{"affiliations":["University of Minnesota, Minneapolis, United States"],"email":"dfk@umn.edu","is_corresponding":false,"name":"Daniel F. 
Keefe"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Bridger Herman"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1137","time_end":"","time_stamp":"","time_start":"","title":"Touching the Ground: Evaluating the Effectiveness of Data Physicalizations for Spatial Data Analysis Tasks","uid":"v-full-1137","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1140":{"abstract":"Written language is a useful mode for non-visual creative activities like writing essays and planning searches. This paper investigates the integration of written language into the visualization design process. We call this idea a `written rudder,' , since it acts as a guiding force or strategy for the design. Via an interview study of 24 working visualization designers, we first established that only a minority of participants systematically use written rudders to aid in design. A second study with 15 visualization designers examined four different variants of rudders: asking questions, stating conclusions, composing a narrative, and writing titles. Overall, participants had a positive reaction; designers recognized the benefits of explicitly writing down components of the design and indicated that they would use this approach in future design work. More specifically, two approaches \u2013- writing questions and writing conclusions/takeaways \u2013- were seen as beneficial across the design process, while writing narratives showed promise mainly for the creation stage. Although concerns around potential bias during data exploration were raised, participants also discussed strategies to mitigate such concerns. This paper contributes to a deeper understanding of the interplay between language and visualization, and proposes a straightforward, lightweight addition to the visualization design process.","accessible_pdf":false,"authors":[{"affiliations":["UC Berkeley, Berkeley, United States"],"email":"chase_stokes@berkeley.edu","is_corresponding":true,"name":"Chase Stokes"},{"affiliations":["Self, Berkeley, United States"],"email":"clarahu@berkeley.edu","is_corresponding":false,"name":"Clara Hu"},{"affiliations":["UC Berkeley, Berkeley, United States"],"email":"hearst@berkeley.edu","is_corresponding":false,"name":"Marti Hearst"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Chase Stokes"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1140","time_end":"","time_stamp":"","time_start":"","title":"It's a Good Idea to Put It Into Words: Writing 'Rudders' in the Initial Stages of Visualization Design","uid":"v-full-1140","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1142":{"abstract":"To deploy machine learning (ML) models on-device, practitioners use compression algorithms to shrink and speed up models while maintaining their high-quality output. 
A critical aspect of compression in practice is model comparison, including tracking many compression experiments, identifying subtle changes in model behavior, and negotiating complex accuracy-efficiency trade-offs. However, existing compression tools poorly support comparison, leading to tedious and, sometimes, incomplete analyses spread across disjoint tools. To support real-world comparative workflows, we develop an interactive visual system called Compress & Compare. Within a single interface, Compress & Compare surfaces promising compression strategies by visualizing provenance relationships between compressed models and reveals compression-induced behavior changes by comparing models' predictions, weights, and activations. We demonstrate how Compress & Compare supports common compression analysis tasks through two case studies\u2014debugging failed compression on generative language models and identifying compression-induced biases in image classification. We further evaluate Compress & Compare in a user study with eight compression experts, illustrating its potential to provide structure to compression workflows, help practitioners build intuition about compression, and encourage thorough analysis of compression\u2019s effect on model behavior. Through these evaluations, we identify compression-specific challenges that future visual analytics tools should consider and Compress & Compare visualizations that may generalize to broader model comparison tasks.","accessible_pdf":false,"authors":[{"affiliations":["Massachusetts Institute of Technology, Cambridge, United States"],"email":"aboggust@mit.edu","is_corresponding":true,"name":"Angie Boggust"},{"affiliations":["Carnegie Mellon University, Pittsburgh, United States"],"email":"vsivaram@andrew.cmu.edu","is_corresponding":false,"name":"Venkatesh Sivaraman"},{"affiliations":["Apple, Cambridge, United States"],"email":"yassogba@gmail.com","is_corresponding":false,"name":"Yannick Assogba"},{"affiliations":["Apple, Seattle, United States"],"email":"donghao@apple.com","is_corresponding":false,"name":"Donghao Ren"},{"affiliations":["Apple, Pittsburgh, United States"],"email":"domoritz@cmu.edu","is_corresponding":false,"name":"Dominik Moritz"},{"affiliations":["Apple, Seattle, United States"],"email":"fred.hohman@gmail.com","is_corresponding":false,"name":"Fred Hohman"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Angie Boggust"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1142","time_end":"","time_stamp":"","time_start":"","title":"Compress and Compare: Interactively Evaluating Efficiency and Behavior Across ML Model Compression Experiments","uid":"v-full-1142","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1147":{"abstract":"Large Language Models (LLMs) like GPT-4 which support multimodal input (i.e., prompts containing images in addition to text) have immense potential to advance visualization research. However, many questions exist about the visual capabilities of such models, including how well they can read and interpret visually represented data. In our work, we address this question by evaluating the GPT-4 multimodal LLM using a suite of task sets meant to assess the model\u2019s visualization literacy. 
The task sets are based on existing work in the visualization community addressing both automated chart question answering and human visualization literacy across multiple settings. Our assessment finds that GPT-4 can perform tasks such as recognizing trends and extreme values, and also demonstrates some understanding of visualization design best practices. By contrast, GPT-4 struggles with simple value retrieval when not provided with the original dataset, lacks the ability to reliably distinguish between colors in charts, and occasionally suffers from hallucination and inconsistency. We conclude by reflecting on the model\u2019s strengths and weaknesses as well as the potential utility of models like GPT-4 for future visualization research. We also release all code, stimuli, and results for the task sets at the following link: (REDACTED FOR REVIEW)","accessible_pdf":false,"authors":[{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"abendeck3@gatech.edu","is_corresponding":true,"name":"Alexander Bendeck"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"john.stasko@cc.gatech.edu","is_corresponding":false,"name":"John Stasko"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Alexander Bendeck"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1147","time_end":"","time_stamp":"","time_start":"","title":"An Empirical Evaluation of the GPT-4 Multimodal Language Model on Visualization Literacy Tasks","uid":"v-full-1147","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1150":{"abstract":"Composite visualization represents a widely embraced design that combines multiple visual representations to create an integrated view. However, the traditional process of creating composite visualizations in immersive environments typically occurs asynchronously outside of the immersive space and is carried out by experienced experts. In this work, we take the first step to empower users to participate in the creation of composite visualization within immersive environments through embodied interactions. This could provide a flexible and fluid experience for data exploration and facilitate a deep understanding of the relationship between data visualizations. We begin by forming a design space of embodied interactions to create various types of composite visualizations with the consideration of data relationships. Drawing inspiration from people's natural experience of manipulating physical objects, we design interactions to directly assemble composite visualizations in immersive environments. Building upon the design space, we present a series of case studies showcasing the interactive method to create different kinds of composite visualizations in Virtual Reality (VR). Subsequently, we conduct a user study to evaluate the usability of the derived interaction techniques and the user experience of creating composite visualizations through embodied interaction. 
We find that empowering users to participate in creating composite visualizations through embodied interactions enables them to flexibly leverage different visualization representations for understanding and communicating the relationships between different views, which underscores the potential for a set of application scenarios in the future.","accessible_pdf":false,"authors":[{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"qzhual@connect.ust.hk","is_corresponding":true,"name":"Qian Zhu"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"luttul@umich.edu","is_corresponding":false,"name":"Tao Lu"},{"affiliations":["Adobe Research, San Jose, United States"],"email":"sguo@adobe.com","is_corresponding":false,"name":"Shunan Guo"},{"affiliations":["Hong Kong University of Science and Technology, Hong Kong, Hong Kong"],"email":"mxj@cse.ust.hk","is_corresponding":false,"name":"Xiaojuan Ma"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"yalongyang@hotmail.com","is_corresponding":false,"name":"Yalong Yang"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Qian Zhu"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1150","time_end":"","time_stamp":"","time_start":"","title":"CompositingVis: Exploring Interaction for Creating Composite Visualizations in Immersive Environments","uid":"v-full-1150","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1153":{"abstract":"Points of interest on a map, such as restaurants, hotels, or subway stations, give rise to categorical point data: data that have a fixed location and one or more categorical attributes. Consequently, recent years have seen various set visualization approaches that visually connect points of the same category to support users in understanding the spatial distribution of categories. Existing methods use complex and often highly irregular shapes to connect points of the same category, leading to high cognitive load for the user. In this paper we introduce SimpleSets that use simple shapes to enclose categorical point patterns and provide a low-complexity overview of the data distribution. We give formal definitions of point patterns that correspond to simple shapes and describe an algorithm that partitions categorical points into few such patterns. Our second contribution is a rendering algorithm that transforms a given partition into a clean set of shapes resulting in an aesthetically pleasing set visualization. Our algorithm pays particular attention to resolving intersections between nearby shapes in a consistent manner. We compare SimpleSets to the state-of-the-art set visualizations using standard datasets from the literature. 
SimpleSets are designed to visualize disjoint categories; however, we discuss avenues to extend our technique to overlapping set systems.","accessible_pdf":false,"authors":[{"affiliations":["TU Eindhoven, Eindhoven, Netherlands"],"email":"s.w.v.d.broek@tue.nl","is_corresponding":true,"name":"Steven van den Broek"},{"affiliations":["TU Eindhoven, Eindhoven, Netherlands"],"email":"w.meulemans@tue.nl","is_corresponding":false,"name":"Wouter Meulemans"},{"affiliations":["TU Eindhoven, Eindhoven, Netherlands"],"email":"b.speckmann@tue.nl","is_corresponding":false,"name":"Bettina Speckmann"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Steven van den Broek"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1153","time_end":"","time_stamp":"","time_start":"","title":"SimpleSets: Capturing Categorical Point Patterns with Simple Shapes","uid":"v-full-1153","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1155":{"abstract":"Interactive visualizations are powerful tools for Exploratory Data Analysis (EDA), but how do they affect the observations analysts make about their data? We conducted a qualitative experiment with 13 professional data scientists analyzing two datasets within Jupyter notebooks, collecting a rich dataset of interaction traces and think-aloud utterances. By qualitatively analyzing participant verbalizations, we introduce the concept of \"observation-analysis states.\" These states capture both the dataset characteristics a participant focuses on and the insights they express. Our definition reveals that interactive visualizations on average lead to earlier and more complex insights about relationships between dataset attributes compared to static visualizations. Moreover, this process identified new measures for studying representation use in notebooks, such as hover time, revisiting rate, and representational diversity. In particular, revisiting rates revealed behavior where analysts revisit particular representations throughout the time course of an analysis, serving more as navigational aids through an EDA than as strict hypothesis answering tools. We show how these measures helped identify other patterns of analysis behavior, such as the \"80-20 rule\", where a small subset of representations drove the majority of observations. 
Based on these findings, we offer design guidelines for interactive exploratory analysis tooling and reflect on future directions for studying the role that visualizations play in EDA.","accessible_pdf":false,"authors":[{"affiliations":["MIT, Cambridge, United States"],"email":"dwootton@mit.edu","is_corresponding":true,"name":"Dylan Wootton"},{"affiliations":["MIT, Cambridge, United States"],"email":"amyraefoxphd@gmail.com","is_corresponding":false,"name":"Amy Rae Fox"},{"affiliations":["University of Colorado Boulder, Boulder, United States"],"email":"evan.peck@colorado.edu","is_corresponding":false,"name":"Evan Peck"},{"affiliations":["MIT, Cambridge, United States"],"email":"arvindsatya@mit.edu","is_corresponding":false,"name":"Arvind Satyanarayan"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Dylan Wootton"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1155","time_end":"","time_stamp":"","time_start":"","title":"Charting EDA: How Visualizations and Interactions Shape Analysis in Computational Notebooks.","uid":"v-full-1155","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1179":{"abstract":"Multi-objective evolutionary algorithms (MOEAs) have emerged as powerful tools for solving complex optimization problems characterized by multiple, often conflicting, objectives. While advancements have been made in computational efficiency as well as diversity and convergence of solutions, a critical challenge persists: the internal evolutionary mechanisms are opaque to human users. Drawing upon the successes of explainable AI in explaining complex algorithms and models, we argue that the need to understand the underlying evolutionary operators and population dynamics in MOEAs aligns well with a visual analytics paradigm. This paper introduces ParetoTracker, a visual analytics framework designed to support the comprehension and inspection of population dynamics in the evolutionary processes of MOEAs. Informed by preliminary literature review and expert interviews, the framework establishes a multi-level analysis scheme, which caters to user engagement and exploration ranging from examining overall trends in performance metrics to conducting fine-grained inspections of evolutionary operations. In contrast to conventional practices that require manual plotting of solutions for each generation, ParetoTracker facilitates the examination of temporal trends and dynamics across consecutive generations in an integrated visual interface. 
The effectiveness of the framework is demonstrated through case studies and expert interviews focused on widely adopted benchmark optimization problems.","accessible_pdf":false,"authors":[{"affiliations":["Southern University of Science and Technology, Shenzhen, China"],"email":"zhangzr32021@mail.sustech.edu.cn","is_corresponding":false,"name":"Zherui Zhang"},{"affiliations":["Southern University of Science and Technology, Shenzhen, China"],"email":"yangf2020@mail.sustech.edu.cn","is_corresponding":false,"name":"Fan Yang"},{"affiliations":["Southern University of Science and Technology, Shenzhen, China"],"email":"ranchengcn@gmail.com","is_corresponding":false,"name":"Ran Cheng"},{"affiliations":["Southern University of Science and Technology, Shenzhen, China"],"email":"mayx@sustech.edu.cn","is_corresponding":true,"name":"Yuxin Ma"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yuxin Ma"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1179","time_end":"","time_stamp":"","time_start":"","title":"ParetoTracker: Understanding Population Dynamics in Multi-objective Evolutionary Algorithms through Visual Analytics","uid":"v-full-1179","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1185":{"abstract":"This paper presents an interactive technique to explain visual patterns in network visualizations to analysts who are unfamiliar with these visualizations and who are learning to read them. Learning a visualization requires mastering its visual grammar and decoding information presented through visual marks, graphical encodings, and spatial configurations. To help people learn unfamiliar network visualization designs and extract meaningful information, we introduce the concept of interactive pattern explanation, which allows viewers to select an arbitrary area in a visualization; the system then mines the underlying data patterns and explains both the visual and data patterns present in the viewer\u2019s selection. In a qualitative and a quantitative user study with a total of 32 participants, we compare interactive pattern explanations to text-only and visual-only (cheat sheet) explanations.
Our results show that interactive explanations increase learning of i) unfamiliar visualizations, ii) patterns in network science, and iii) the respective network terminology.","accessible_pdf":false,"authors":[{"affiliations":["Newcastle University, Newcastle Upon Tyne, United Kingdom"],"email":"xinhuan.shu@gmail.com","is_corresponding":true,"name":"Xinhuan Shu"},{"affiliations":["University of Edinburgh, Edinburgh, United Kingdom"],"email":"alexis.pister@hotmail.com","is_corresponding":false,"name":"Alexis Pister"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"tangjunxiu@zju.edu.cn","is_corresponding":false,"name":"Junxiu Tang"},{"affiliations":["University of Toronto, Toronto, Canada"],"email":"fanny@dgp.toronto.edu","is_corresponding":false,"name":"Fanny Chevalier"},{"affiliations":["Inria, Bordeaux, France","University of Edinburgh, Edinburgh, United Kingdom"],"email":"bbach@inf.ed.ac.uk","is_corresponding":false,"name":"Benjamin Bach"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Xinhuan Shu"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1185","time_end":"","time_stamp":"","time_start":"","title":"Does This Have a Particular Meaning?: Interactive Pattern Explanation for Network Visualizations","uid":"v-full-1185","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1193":{"abstract":"Emerging multimodal large language models (MLLMs) exhibit great potential for chart question answering (CQA). Recent efforts primarily focus on scaling up training datasets (i.e., charts, data tables, and question-answer (QA) pairs) through data collection and synthesis. However, our empirical study on existing MLLMs and CQA datasets reveals notable gaps. First, current data collection and synthesis focus on data volume and lack consideration of fine-grained visual encodings and QA tasks, resulting in an unbalanced data distribution divergent from practical CQA scenarios. Second, existing work follows the training recipe of the base MLLMs initially designed for natural images, under-exploring the adaptation to unique chart characteristics, such as rich text elements. To fill the gap, we propose a visualization-referenced instruction tuning approach to guide the training dataset enhancement and model development. Specifically, we propose a novel data engine to effectively filter diverse and high-quality data from existing datasets and subsequently refine and augment the data using LLM-based generation techniques to better align with practical QA tasks and visual encodings. Then, to facilitate the adaptation to chart characteristics, we utilize the enriched data to train an MLLM by unfreezing the vision encoder and incorporating a mixture-of-resolution adaptation strategy for enhanced fine-grained recognition. Experimental results validate the effectiveness of our approach. Even with fewer training examples, our model consistently outperforms state-of-the-art CQA models on established benchmarks.
We also contribute a dataset split as a benchmark for future research.","accessible_pdf":false,"authors":[{"affiliations":["The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China"],"email":"xingchen.zeng@outlook.com","is_corresponding":true,"name":"Xingchen Zeng"},{"affiliations":["The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China"],"email":"hlin386@connect.hkust-gz.edu.cn","is_corresponding":false,"name":"Haichuan Lin"},{"affiliations":["The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China"],"email":"yyebd@connect.ust.hk","is_corresponding":false,"name":"Yilin Ye"},{"affiliations":["The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China","The Hong Kong University of Science and Technology, Hong Kong SAR, China"],"email":"weizeng@hkust-gz.edu.cn","is_corresponding":false,"name":"Wei Zeng"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Xingchen Zeng"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1193","time_end":"","time_stamp":"","time_start":"","title":"Advancing Multimodal Large Language Models in Chart Question Answering with Visualization-Referenced Instruction Tuning","uid":"v-full-1193","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1202":{"abstract":"The Dunning-Kruger Effect (DKE) is a metacognitive phenomenon where low-skilled individuals tend to overestimate their competence while high-skilled individuals tend to underestimate their competence. This effect has been observed in a number of domains, including humor, grammar, and logic. In this paper, we explore if and how DKE manifests in visual reasoning and visual data analysis tasks. Across two online user studies involving (1) a sliding puzzle game and (2) a scatterplot-based categorization task, we demonstrate that individuals are susceptible to DKE in visual tasks: those who performed best underestimated their performance, while bottom performers overestimated their performance. In addition, we contribute novel analyses that correlate susceptibility to DKE with several variables, including personality traits and user interactions.
Our findings pave the way for novel modes of bias detection via interaction patterns and establish promising directions towards interventions tailored to an individual's personality traits.","accessible_pdf":false,"authors":[{"affiliations":["Emory University, Atlanta, United States"],"email":"mengyu.chen@emory.edu","is_corresponding":true,"name":"Mengyu Chen"},{"affiliations":["Emory University, Atlanta, United States"],"email":"yijun.liu2@emory.edu","is_corresponding":false,"name":"Yijun Liu"},{"affiliations":["Emory University, Atlanta, United States"],"email":"emily.wall@emory.edu","is_corresponding":false,"name":"Emily Wall"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Mengyu Chen"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1202","time_end":"","time_stamp":"","time_start":"","title":"Unmasking Dunning-Kruger Effect in Visual Reasoning and Visual Data Analysis","uid":"v-full-1202","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1204":{"abstract":"We present ProvenanceWidgets, a JavaScript library of UI control elements such as radio buttons, checkboxes, and dropdowns to track and dynamically overlay a user's analytic provenance. These in situ overlays not only save screen space but also minimize the amount of time and effort needed to access the same information from elsewhere in the UI. In this paper, we discuss how we design modular UI control elements to track how often and how recently a user interacts with them and design visual overlays showing an aggregated summary as well as a detailed temporal history. We demonstrate the capability of ProvenanceWidgets by recreating three prior widget libraries: (1) Scented Widgets, (2) Phosphor objects, and (3) Dynamic Query Widgets. We also evaluated its expressiveness and conducted case studies with visualization developers to assess its effectiveness. We find that ProvenanceWidgets enables developers to implement custom provenance-tracking applications effectively.
ProvenanceWidgets is available as open-source software at https://github.com/ProvenanceWidgets to help application developers build custom provenance-based systems.","accessible_pdf":false,"authors":[{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"arpitnarechania@gatech.edu","is_corresponding":true,"name":"Arpit Narechania"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"kaustubhodak1@gmail.com","is_corresponding":false,"name":"Kaustubh Odak"},{"affiliations":["ETH Z\u00fcrich, Z\u00fcrich, Switzerland"],"email":"melassady@ai.ethz.ch","is_corresponding":false,"name":"Mennatallah El-Assady"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"endert@gatech.edu","is_corresponding":false,"name":"Alex Endert"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Arpit Narechania"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1204","time_end":"","time_stamp":"","time_start":"","title":"ProvenanceWidgets: A Library of UI Control Elements to Track and Dynamically Overlay Analytic Provenance","uid":"v-full-1204","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1214":{"abstract":"Graphs are often used to model relationships between entities. The identification and visualization of clusters in graphs enable insight discovery in many application areas, such as life sciences and social sciences. Force-directed graph layout algorithms promote the visual saliency of clusters, as they generally bring adjacent nodes closer together and push non-adjacent nodes apart. In this work, we study the impact of node ordering on the visual saliency of clusters in orderable node-link diagrams, namely radial diagrams, arc diagrams and symmetric arc diagrams. Through a crowdsourced controlled experiment, we show that users can count clusters consistently more accurately, and to a large extent faster, with orderable node-link diagrams than with three state-of-the-art force-directed layout algorithms, i.e., 'Linlog', 'Backbone', and 'sfdp'. The measured advantage is greater in cases of low cluster separability and/or low compactness.
A free copy of this paper and all supplemental materials are available at https://osf.io/kc3dg/?view_only=892f7b96752e40a6baefb2e50e866f9d","accessible_pdf":false,"authors":[{"affiliations":["Luxembourg Institute of Science and Technology, Esch-sur-Alzette, Luxembourg"],"email":"nora.alnaami@list.lu","is_corresponding":false,"name":"Nora Al-Naami"},{"affiliations":["Luxembourg Institute of Science and Technology, Belvaux, Luxembourg"],"email":"nicolas.medoc@list.lu","is_corresponding":false,"name":"Nicolas Medoc"},{"affiliations":["Uppsala University, Uppsala, Sweden"],"email":"matteo.magnani@it.uu.se","is_corresponding":false,"name":"Matteo Magnani"},{"affiliations":["Luxembourg Institute of Science and Technology, Belvaux, Luxembourg"],"email":"mohammad.ghoniem@list.lu","is_corresponding":true,"name":"Mohammad Ghoniem"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Mohammad Ghoniem"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1214","time_end":"","time_stamp":"","time_start":"","title":"Improved Visual Saliency of Graph Clusters with Orderable Node-Link Layouts","uid":"v-full-1214","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1218":{"abstract":"Placing text labels is a common way to explain key elements in a given scene. Given a graphic input and original label information, how to place labels to meet both geometric and aesthetic requirements is an open challenging problem. Geometry-wise, traditional rule-driven solutions struggle to capture the complex interactions between labels, let alone consider graphical/appearance content. In terms of aesthetics, training/evaluation data ideally require nontrivial effort and expertise in design, thus resulting in a lack of decent datasets for learning-based methods. To address the above challenges, we formulate the task with a graph representation, where nodes correspond to labels and edges to the between-label interactions, and treat label placement as a node position prediction problem. With this novel representation, we design a Label Placement Graph Transformer (LPGT) to predict label positions. Specifically, edge-level attention, conditioned on node representations, is introduced to reveal potential relationships between labels. To integrate graphic/image information, we design a feature aligning strategy that extracts deep features for nodes and edges efficiently. Next, to address the dataset issue, we collect commercial illustrations with professionally designed label layouts from household appliance manuals, and annotate them with useful information to create a novel dataset named the Appliance Manual Illustration Labels (AMIL) dataset. 
In a thorough evaluation on AMIL, our LPGT solution achieves promising label placement performance compared with popular baselines.","accessible_pdf":false,"authors":[{"affiliations":["Southwest University, Beibei, China"],"email":"qujingwei@swu.edu.cn","is_corresponding":true,"name":"Jingwei Qu"},{"affiliations":["Southwest University, Chongqing, China"],"email":"z2211973606@email.swu.edu.cn","is_corresponding":false,"name":"Pingshun Zhang"},{"affiliations":["Southwest University, Beibei, China"],"email":"enyuche@gmail.com","is_corresponding":false,"name":"Enyu Che"},{"affiliations":["College of Computer and Information Science, Southwest University School of Software, Chongqing, China"],"email":"out1147205215@outlook.com","is_corresponding":false,"name":"Yinan Chen"},{"affiliations":["Stony Brook University, New York, United States"],"email":"hling@cs.stonybrook.edu","is_corresponding":false,"name":"Haibin Ling"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Jingwei Qu"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1218","time_end":"","time_stamp":"","time_start":"","title":"Graph Transformer for Label Placement","uid":"v-full-1218","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1232":{"abstract":"How do cancer cells grow, divide, proliferate, and die? How do drugs influence these processes? These are difficult questions that we can attempt to answer with a combination of time-series microscopy experiments, classification algorithms, and data visualization. However, collecting this type of data and applying algorithms to segment and track cells and construct lineages of proliferation is error-prone, and identifying the errors can be challenging since it often requires cross-checking multiple data types. Similarly, analyzing and communicating the results necessitates synthesizing different data types into a single narrative. State-of-the-art visualization methods for such data use independent line charts, tree diagrams, and images in separate views. However, this spatial separation requires the viewer of these charts to combine the relevant pieces of data in memory. To simplify this challenging task, we describe design principles for weaving cell images, time-series data, and tree data into a cohesive visualization. Our design principles are based on choosing a primary data type that drives the layout and integrates the other data types into that layout. We then introduce Aardvark, a system that uses these principles to implement novel visualization techniques.
Based on Aardvark, we demonstrate the utility of each of these approaches for discovery, communication, and data debugging in a series of case studies.","accessible_pdf":false,"authors":[{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"devin@sci.utah.edu","is_corresponding":true,"name":"Devin Lange"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"robert.judson-torres@hci.utah.edu","is_corresponding":false,"name":"Robert L Judson-Torres"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"tzangle@chemeng.utah.edu","is_corresponding":false,"name":"Thomas A Zangle"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"alex@sci.utah.edu","is_corresponding":false,"name":"Alexander Lex"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Devin Lange"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1232","time_end":"","time_stamp":"","time_start":"","title":"Aardvark: Composite Visualizations of Trees, Time-Series, and Images","uid":"v-full-1232","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1251":{"abstract":"Exploratory data science is an iterative process of obtaining, cleaning, profiling, analyzing, and interpreting data. This cyclical way of working creates challenges within the linear structure of computational notebooks that lead to issues with code quality, recall, and reproducibility. To remedy this, we present Loops, a set of visual support techniques for iterative and exploratory data analysis in computational notebooks. Loops leverages provenance information to visualize the impact of changes made within a notebook. In visualizations of the notebook history, we trace the evolution of the notebook over time and highlight differences between versions. Loops visualizes the provenance of code, markdown, tables, visualizations, and images and their respective differences. Analysts can explore these differences in detail in a separate view. Loops not only improves the reproducibility of notebooks, but also supports analysts in their data science work by showing the effects of changes and facilitating comparison of multiple versions. 
We demonstrate the utility and potential impact of our approach through two use cases and feedback from notebook users with a range of backgrounds.","accessible_pdf":false,"authors":[{"affiliations":["Johannes Kepler University Linz, Linz, Austria"],"email":"klaus@eckelt.info","is_corresponding":true,"name":"Klaus Eckelt"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"kirangadhave2@gmail.com","is_corresponding":false,"name":"Kiran Gadhave"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"alex@sci.utah.edu","is_corresponding":false,"name":"Alexander Lex"},{"affiliations":["Johannes Kepler University Linz, Linz, Austria"],"email":"marc.streit@jku.at","is_corresponding":false,"name":"Marc Streit"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Klaus Eckelt"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1251","time_end":"","time_stamp":"","time_start":"","title":"Loops: Leveraging Provenance and Visualization to Support Exploratory Data Analysis in Notebooks","uid":"v-full-1251","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1256":{"abstract":"People commonly utilize visualizations not only to examine a given dataset, but also to draw generalizable conclusions about the underlying models or phenomena. Previous research has compared human visual inference to that of an optimal Bayesian agent, with deviations from rational analysis viewed as problematic. However, human reliance on non-normative heuristics may prove advantageous in certain circumstances. We investigate scenarios where human intuition might surpass idealized statistical rationality. In two experiments, we examine individuals' accuracy in characterizing the parameters of known data-generating models from bivariate visualizations. Our findings indicate that, although participants generally exhibited lower accuracy compared to statistical models, they frequently outperformed Bayesian agents, particularly when faced with extreme samples. Participants appeared to rely on their internal models to filter out noisy visualizations, thus improving their resilience against spurious data. However, participants displayed overconfidence and struggled with uncertainty estimation. They also exhibited higher variance than statistical machines. Our findings suggest that analyst gut reactions to visualizations may provide an advantage, even when departing from rationality. These results carry implications for designing visual analytics tools, offering new perspectives on how to integrate statistical models and analyst intuition for improved inference and decision-making.","accessible_pdf":false,"authors":[{"affiliations":["Indiana University, Indianapolis, United States"],"email":"rkoonch@iu.edu","is_corresponding":true,"name":"Ratanond Koonchanok"},{"affiliations":["Argonne National Laboratory, Lemont, United States","University of Illinois Chicago, Chicago, United States"],"email":"papka@anl.gov","is_corresponding":false,"name":"Michael E. 
Papka"},{"affiliations":["Indiana University, Indianapolis, United States"],"email":"redak@iu.edu","is_corresponding":false,"name":"Khairi Reda"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Ratanond Koonchanok"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1256","time_end":"","time_stamp":"","time_start":"","title":"Trust Your Gut: Comparing Human and Machine Inference from Noisy Visualizations","uid":"v-full-1256","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1258":{"abstract":"Providing effective guidance for users has long been an important and challenging task for efficient exploratory visual analytics, especially when selecting variables for visualization in high-dimensional datasets. Correlation is the most widely applied metric for guidance in statistical and analytical tools, however a reliance on correlation may lead users towards false positives when interpreting causal relations in the data. In this work, inspired by prior insights on the benefits of counterfactual visualization in supporting visual causal inference, we propose a novel, simple, and efficient counterfactual guidance method to enhance causal inference performance in guided exploratory analytics based on insights and concerns gathered from expert interviews. Our technique aims to capitalize on the benefits of counterfactual approaches while reducing their complexity for users. We integrated counterfactual guidance into an exploratory visual analytics system, and using a synthetically generated ground-truth causal dataset, conducted a comparative user study and evaluated to what extent counterfactual guidance can help lead users to more precise visual causal inferences. The results suggest that counterfactual guidance improved visual causal inference performance, and also led to different exploratory behaviors compared to correlation-based guidance. 
Based on these findings, we offer future directions to incorporate and examine counterfactual guidance to better support exploratory visual analytics.","accessible_pdf":false,"authors":[{"affiliations":["University of North Carolina-Chapel Hill, Chapel Hill, United States"],"email":"zeyuwang@cs.unc.edu","is_corresponding":true,"name":"Arran Zeyu Wang"},{"affiliations":["UNC-Chapel Hill, Chapel Hill, United States"],"email":"borland@renci.org","is_corresponding":false,"name":"David Borland"},{"affiliations":["University of North Carolina, Chapel Hill, United States"],"email":"gotz@unc.edu","is_corresponding":false,"name":"David Gotz"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Arran Zeyu Wang"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1258","time_end":"","time_stamp":"","time_start":"","title":"Beyond Correlation: Incorporating Counterfactual Guidance to Better Support Exploratory Visual Analysis","uid":"v-full-1258","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1272":{"abstract":"In various scientific and industrial domains, analyzing multivariate spatial data, i.e., vectors associated with spatial locations, is common practice. To analyze those datasets, analysts may turn to models such as Spatial Blind Source Separation (SBSS). Designed explicitly for spatial data analysis, SBSS finds latent components in the dataset and is superior to popular non-spatial models such as PCA. However, when analysts try different tuning parameter settings, the number of latent components complicates analytical tasks. Based on our years-long collaboration with SBSS researchers, we propose a visualization approach to tackle this challenge. The main component is UnDRground Tubes (UT), a general-purpose idiom combining ideas from set visualization and multidimensional projections. We describe the UT visualization pipeline and integrate UT into an interactive multiple-view system. We demonstrate its effectiveness through interviews with SBSS experts, a qualitative evaluation with visualization experts, and computational experiments. SBSS experts were excited about our approach. They saw many benefits for their work and potential applications for geostatistical data analysis more generally. UT was also very well received by visualization experts.
Our benchmarks show that UT's projections and heuristics are appropriate.","accessible_pdf":false,"authors":[{"affiliations":["TU Wien, Vienna, Austria"],"email":"nikolaus.piccolotto@tuwien.ac.at","is_corresponding":true,"name":"Nikolaus Piccolotto"},{"affiliations":["TU Wien, Vienna, Austria"],"email":"mwallinger@ac.tuwien.ac.at","is_corresponding":false,"name":"Markus Wallinger"},{"affiliations":["Institute of Visual Computing and Human-Centered Technology, Vienna, Austria"],"email":"miksch@ifs.tuwien.ac.at","is_corresponding":false,"name":"Silvia Miksch"},{"affiliations":["TU Wien, Vienna, Austria"],"email":"markus.boegl@tuwien.ac.at","is_corresponding":false,"name":"Markus B\u00f6gl"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Nikolaus Piccolotto"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1272","time_end":"","time_stamp":"","time_start":"","title":"UnDRground Tubes: Exploring Spatial Data With Multidimensional Projections and Set Visualization","uid":"v-full-1272","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1275":{"abstract":"We developed and validated an instrument to measure the perceived readability in data visualization: PREVis. Researchers and practitioners can easily use this instrument as part of their evaluations to compare the perceived readability of different visual data representations. Our instrument can complement results from controlled experiments on user task performance or provide additional data during in-depth qualitative work such as design iterations when developing a new technique. Although readability is recognized as an essential quality of data visualizations, so far there has not been a unified definition of the construct in the context of visual representations. As a result, researchers often lack guidance for determining how to ask people to rate their perceived readability of a visualization. To address this issue, we engaged in a rigorous process to develop the first validated instrument targeted at the subjective readability of visual data representations. Our final instrument consists of 11 items across 4 dimensions: understandability, layout clarity, readability of data values, and readability of data patterns. We provide the questionnaire as a document with implementation guidelines on osf.io/9cg8j.
Beyond this instrument, we contribute a discussion of how researchers have previously assessed visualization readability, and an analysis of the factors underlying perceived readability in visual data representations.","accessible_pdf":false,"authors":[{"affiliations":["LISN, Universit\u00e9 Paris Saclay, CNRS, Orsay, France","Aviz, Inria, Saclay, France"],"email":"acabouat@gmail.com","is_corresponding":true,"name":"Anne-Flore Cabouat"},{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Orsay, France","Inria, Saclay, France"],"email":"tingying.he@inria.fr","is_corresponding":false,"name":"Tingying He"},{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Orsay, France","Inria, Saclay, France"],"email":"petra.isenberg@inria.fr","is_corresponding":false,"name":"Petra Isenberg"},{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Orsay, France","Inria, Saclay, France"],"email":"tobias.isenberg@gmail.com","is_corresponding":false,"name":"Tobias Isenberg"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Anne-Flore Cabouat"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1275","time_end":"","time_stamp":"","time_start":"","title":"PREVis: Perceived Readability Evaluation for Visualizations","uid":"v-full-1275","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1277":{"abstract":"This paper presents a novel end-to-end framework for closed-form computation and visualization of critical point uncertainty in 2D uncertain scalar fields. Critical points are fundamental topological descriptors used in the visualization and analysis of scalar fields. The uncertainty inherent in data (e.g., observational and experimental data, approximations in simulations, and compression), however, creates uncertainty regarding critical point positions. Uncertainty in critical point positions, therefore, cannot be ignored, given their impact on downstream data analysis tasks. In this work, we study uncertainty in critical points as a function of uncertainty in data modeled with probability distributions. Although Monte Carlo (MC) sampling techniques have been used in prior studies to quantify critical point uncertainty, they are often expensive and are infrequently used in production-quality visualization software. We, therefore, propose a new end-to-end framework to address these challenges that comprises a threefold contribution. First, we derive the critical point uncertainty in closed form, which is more accurate and efficient than the conventional MC sampling methods. Specifically, we provide the closed-form and semianalytical (a mix of closed-form and MC methods) solutions for parametric (e.g., uniform, Epanechnikov) and nonparametric models (e.g., histograms) with finite support. Second, we accelerate critical point probability computations using a parallel implementation with the VTK-m library, which is platform portable. Finally, we demonstrate the integration of our implementation with the ParaView software system to demonstrate near-real-time results for real datasets.","accessible_pdf":false,"authors":[{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"tushar.athawale@gmail.com","is_corresponding":true,"name":"Tushar M. 
Athawale"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"wangz@ornl.gov","is_corresponding":false,"name":"Zhe Wang"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"pugmire@ornl.gov","is_corresponding":false,"name":"David Pugmire"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"kmorel@acm.org","is_corresponding":false,"name":"Kenneth Moreland"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"gongq@ornl.gov","is_corresponding":false,"name":"Qian Gong"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"klasky@ornl.gov","is_corresponding":false,"name":"Scott Klasky"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"crj@sci.utah.edu","is_corresponding":false,"name":"Chris R. Johnson"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"paul.rosen@utah.edu","is_corresponding":false,"name":"Paul Rosen"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Tushar M. Athawale"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1277","time_end":"","time_stamp":"","time_start":"","title":"Uncertainty Visualization of Critical Points of 2D Scalar Fields for Parametric and Nonparametric Probabilistic Models","uid":"v-full-1277","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1281":{"abstract":"Participatory budgeting (PB) is a democratic approach to allocating municipal spending that has been adopted in many places in recent years, including in Chicago. Current PB voting resembles a ballot where residents are asked which municipal projects, such as school improvements and road repairs, to fund with a limited budget. In this work, we ask how interactive visualization can benefit PB by conducting a design probe-based interview study (N=13) with policy workers and academics with expertise in PB, urban planning, and civic HCI. Our probe explores how graphical elicitation of voter preferences and a dashboard of voting statistics can be incorporated into a realistic PB tool. Through qualitative analysis, we find that visualization creates opportunities for city government to set expectations about budget constraints while also granting their constituents greater freedom to articulate a wider range of preferences. However, using visualization to provide transparency about PB requires efforts to mitigate potential access barriers and mistrust. 
We call for more visualization professionals to help build civic capacity by working in and studying political systems.","accessible_pdf":false,"authors":[{"affiliations":["University of Chicago, Chicago, United States"],"email":"kalea@uchicago.edu","is_corresponding":true,"name":"Alex Kale"},{"affiliations":["University of Chicago, Chicago, United States"],"email":"danni6@uchicago.edu","is_corresponding":false,"name":"Danni Liu"},{"affiliations":["University of Chicago, Chicago, United States"],"email":"mariagabrielaa@uchicago.edu","is_corresponding":false,"name":"Maria Gabriela Ayala"},{"affiliations":["University of Chicago, Chicago, United States"],"email":"hwschwab@uchicago.edu","is_corresponding":false,"name":"Harper Schwab"},{"affiliations":["University of Washington, Seattle, United States","University of Utah, Salt Lake City, United States"],"email":"mcnutt.andrew@gmail.com","is_corresponding":false,"name":"Andrew M McNutt"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Alex Kale"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1281","time_end":"","time_stamp":"","time_start":"","title":"What Can Interactive Visualization do for Participatory Budgeting in Chicago?","uid":"v-full-1281","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1288":{"abstract":"Data tables are one of the most common ways in which people encounter data. Although mostly built with text and numbers, data tables have a spatial layout and often exhibit visual elements meant to facilitate their reading. Surprisingly, there is an empirical knowledge gap on how people read and use tables and how different visual aids affect people's ability to use them. In this work, we seek to address this vacuum through a controlled study. We asked participants to repeatedly perform four different tasks with tables in four table representation conditions (plain tables, tables with zebra striping, tables with cell background color encoding cell value, and tables with background bar length in a cell encoding cell value). We analyzed completion time, error rate, gaze-tracking data, mouse movement, and participant preferences. We found that visual encodings help with finding maximum values (especially color), but not as much as zebra striping helps in a complex task (comparison of proportional differences). We also characterize typical human behavior for the different tasks.
These findings can inform the design of tables and research directions for improving the presentation of data in tabular form.","accessible_pdf":false,"authors":[{"affiliations":["University of Victoria, Victoria, Canada"],"email":"yongfengji@uvic.ca","is_corresponding":false,"name":"YongFeng Ji"},{"affiliations":["University of Victoria, Victoria, Canada"],"email":"cperin@uvic.ca","is_corresponding":true,"name":"Charles Perin"},{"affiliations":["University of Victoria, Victoria, Canada"],"email":"nacenta@gmail.com","is_corresponding":false,"name":"Miguel A Nacenta"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Charles Perin"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1288","time_end":"","time_stamp":"","time_start":"","title":"The Effect of Visual Aids on Reading Numeric Data Tables","uid":"v-full-1288","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1290":{"abstract":"Visualization linters are end-user-facing evaluators that automatically identify potential chart issues. These spell-checker-like systems offer a blend of interpretability and customization that is not found in other forms of automated assistance. However, existing linters do not model context and have primarily targeted users who do not need assistance, resulting in obvious---even annoying---advice. We investigate these issues within the domain of color palette design, which serves as a microcosm of visualization design concerns. We contribute a GUI-based color palette linter as a design probe that covers perception, accessibility, context, and other design criteria, and use it to explore visual explanations, integrated fixes, and user-defined linting rules. Through a formative interview study and theory-driven analysis, we find that linters can be meaningfully integrated into graphical contexts, thereby addressing many of their core issues.
We discuss implications for integrating linters into visualization tools, developing improved assertion languages, and supporting end-user tunable advice---all laying the groundwork for more effective visualization linters in any context.","accessible_pdf":false,"authors":[{"affiliations":["University of Washington, Seattle, United States","University of Utah, Salt Lake City, United States"],"email":"mcnutt.andrew@gmail.com","is_corresponding":true,"name":"Andrew M McNutt"},{"affiliations":["University of Washington, Seattle, United States"],"email":"maureen.stone@gmail.com","is_corresponding":false,"name":"Maureen Stone"},{"affiliations":["University of Washington, Seattle, United States"],"email":"jheer@uw.edu","is_corresponding":false,"name":"Jeffrey Heer"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Andrew M McNutt"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1290","time_end":"","time_stamp":"","time_start":"","title":"Mixing Linters with GUIs: A Color Palette Design Probe","uid":"v-full-1290","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1291":{"abstract":"Emotion is an important factor to consider when designing visualizations as it can impact the amount of trust viewers place in a visualization, how well they can retrieve information and understand the underlying data, and how much they engage with or connect to a visualization. We conducted five crowdsourced experiments to quantify the effects of color, chart type, data trend, data variability and data density on emotion (measured through self-reported arousal and valence). Results from our experiments show that there are multiple design elements which influence the emotion induced by a visualization and, more surprisingly, that certain data characteristics influence the emotion of viewers even when the data has no meaning. 
In light of these findings, we offer guidelines on how to use color, scale, and chart type to counterbalance and emphasize the emotional impact of immutable data characteristics.","accessible_pdf":false,"authors":[{"affiliations":["University of Waterloo, Waterloo, Canada","University of Victoria, Victoria, Canada"],"email":"cartergblair@gmail.com","is_corresponding":false,"name":"Carter Blair"},{"affiliations":["University of Victoria, Victoria, Canada","Delft University of Technology, Delft, Netherlands"],"email":"xiyao.wang23@gmail.com","is_corresponding":false,"name":"Xiyao Wang"},{"affiliations":["University of Victoria, Victoria, Canada"],"email":"cperin@uvic.ca","is_corresponding":true,"name":"Charles Perin"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Charles Perin"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1291","time_end":"","time_stamp":"","time_start":"","title":"Quantifying Emotional Responses to Immutable Data Characteristics and Designer Choices in Data Visualizations","uid":"v-full-1291","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1295":{"abstract":"Annotations play a vital role in highlighting critical aspects of visualizations, aiding in data externalization and exploration, collaborative data analysis, and visual storytelling. However, despite their widespread use, we identified the lack of a design space for common annotation practices. In this paper, we evaluated over 1,800 static annotated charts to understand how people annotate visualizations in practice. Through qualitative coding of these diverse real-world annotated charts, we explore three primary aspects of annotation usage patterns: analytic purposes for chart annotations (e.g., present, identify, summarize, or compare data features), mechanisms for chart annotations (e.g., types and combinations of annotations used, frequency of different annotation types across chart types, etc.), and the data source used to generate the annotations. We then synthesized our findings into a design space of annotations, highlighting key design choices for chart annotations.
We presented three case studies illustrating our design space as a practical framework for chart annotations to enhance the communication of visualization insights.","accessible_pdf":false,"authors":[{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"dilshadur@sci.utah.edu","is_corresponding":true,"name":"Md Dilshadur Rahman"},{"affiliations":["University of Oklahoma, Norman, United States"],"email":"quadri@ou.edu","is_corresponding":false,"name":"Ghulam Jilani Quadri"},{"affiliations":["University of South Florida, Tampa, United States"],"email":"bdoppalapudi@usf.edu","is_corresponding":false,"name":"Bhavana Doppalapudi"},{"affiliations":["University of North Carolina-Chapel Hill, Chapel Hill, United States"],"email":"danielle.szafir@cs.unc.edu","is_corresponding":false,"name":"Danielle Albers Szafir"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"paul.rosen@utah.edu","is_corresponding":false,"name":"Paul Rosen"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Md Dilshadur Rahman"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1295","time_end":"","time_stamp":"","time_start":"","title":"A Qualitative Analysis of Common Practices in Annotations: A Taxonomy and Design Space","uid":"v-full-1295","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1302":{"abstract":"We present the results of an exploratory study on how pairs interact with speech commands and touch gestures on a wall-sized display during a collaborative sensemaking task. Previous work has shown that speech commands, alone or in combination with other input modalities, can support visual data exploration by individuals. However, it is still unknown whether and how speech commands can be used in collaboration, and for what tasks. To answer these questions, we developed a functioning prototype that we used as a technology probe. We conducted an in-depth exploratory study with 20 participants (10 pairs) to analyze their interaction choices, the interplay between the input modalities, and their collaboration. While touch was the most used modality, we found that participants preferred speech commands for global operations, used them for distant interaction, and that speech interaction contributed to the awareness of the partner\u2019s actions. Furthermore, the likelihood of using speech commands during collaboration was related to the personality trait of agreeableness. Regarding collaboration styles, participants interacted with speech equally often whether they were in loosely or closely coupled collaboration. While the partners stood closer to each other during close collaboration, they did not walk away from their partner to use speech commands.
From our findings, we derive and contribute a set of design considerations for collaborative and multimodal interactive data analysis systems.","accessible_pdf":false,"authors":[{"affiliations":["University of Bremen, Bremen, Germany"],"email":"molina@uni-bremen.de","is_corresponding":true,"name":"Gabriela Molina Le\u00f3n"},{"affiliations":["LISN, Universit\u00e9 Paris-Saclay, CNRS, INRIA, Orsay, France"],"email":"anastasia.bezerianos@universite-paris-saclay.fr","is_corresponding":false,"name":"Anastasia Bezerianos"},{"affiliations":["Inria, Palaiseau, France"],"email":"olivier.gladin@inria.fr","is_corresponding":false,"name":"Olivier Gladin"},{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Orsay, France","Inria, Saclay, France"],"email":"petra.isenberg@inria.fr","is_corresponding":false,"name":"Petra Isenberg"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Gabriela Molina Le\u00f3n"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1302","time_end":"","time_stamp":"","time_start":"","title":"Talk to the Wall: The Role of Speech Interaction in Collaborative Visual Analytics","uid":"v-full-1302","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1307":{"abstract":"Building information modeling (BIM) describes a central data pool covering the entire life cycle of a construction project. Similarly, building energy modeling (BEM) describes the process of using a 3D representation of a building as a basis for thermal simulations to assess the building\u2019s energy performance. This paper explores the intersection of BIM and BEM, focusing on the challenges and methodologies in converting BIM data into BEM representations for energy performance analysis. BEMTrace integrates 3D data wrangling techniques with visualization methodologies to enhance the accuracy and traceability of the BIM-to-BEM conversion process. Through parsing, error detection, and algorithmic correction of BIM data, our methods generate valid BEM models suitable for energy simulation. Visualization techniques provide transparent insights into the conversion process, aiding error identification, validation, and user comprehension. We introduce context-adaptive selections to facilitate user interaction and understanding throughout the conversion process.
By evaluating user feedback, we show that BEMTrace can solve domain-specific tasks.","accessible_pdf":false,"authors":[{"affiliations":["VRVis Zentrum f\u00fcr Virtual Reality und Visualisierung Forschungs-GmbH, Vienna, Austria"],"email":"walch@vrvis.at","is_corresponding":false,"name":"Andreas Walch"},{"affiliations":["VRVis Zentrum f\u00fcr Virtual Reality und Visualisierung Forschungs-GmbH, Vienna, Austria"],"email":"szabo@vrvis.at","is_corresponding":false,"name":"Attila Szabo"},{"affiliations":["VRVis Zentrum f\u00fcr Virtual Reality und Visualisierung Forschungs-GmbH, Vienna, Austria"],"email":"hs@vrvis.at","is_corresponding":false,"name":"Harald Steinlechner"},{"affiliations":["Independent Researcher, Vienna, Austria"],"email":"thomas@ortner.fyi","is_corresponding":false,"name":"Thomas Ortner"},{"affiliations":["Institute of Visual Computing and Human-Centered Technology, Vienna, Austria"],"email":"groeller@cg.tuwien.ac.at","is_corresponding":false,"name":"Eduard Gr\u00f6ller"},{"affiliations":["VRVis Zentrum f\u00fcr Virtual Reality und Visualisierung Forschungs-GmbH, Vienna, Austria"],"email":"johanna.schmidt@vrvis.at","is_corresponding":true,"name":"Johanna Schmidt"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Johanna Schmidt"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1307","time_end":"","time_stamp":"","time_start":"","title":"BEMTrace: Visualization-driven approach for deriving Building Energy Models from BIM","uid":"v-full-1307","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1309":{"abstract":"Visualizations play a critical role in validating and improving statistical models. However, the design space of model check visualizations is not well understood, making it difficult for authors to explore and specify effective graphical model checks. VMC defines a model check visualization using four components: (1) samples of distributions of checkable quantities generated from the model, including predictive distributions for new data and distributions of model parameters; (2) transformations on observed data to facilitate comparison; (3) visual representations of distributions; and (4) layouts to facilitate comparing model samples and observed data. We contribute an implementation of VMC as an R package. We validate VMC by reproducing a set of canonical model check examples, and show how using VMC to generate model checks reduces the edit distance between visualizations relative to existing visualization toolkits.
The findings of an interview study with three expert modelers who used VMC highlight challenges and opportunities for encouraging exploration of correct, effective model check visualizations.","accessible_pdf":false,"authors":[{"affiliations":["Northwestern University, Evanston, United States"],"email":"ziyangguo1030@gmail.com","is_corresponding":true,"name":"Ziyang Guo"},{"affiliations":["University of Chicago, Chicago, United States"],"email":"kalea@uchicago.edu","is_corresponding":false,"name":"Alex Kale"},{"affiliations":["Northwestern University, Chicago, United States"],"email":"matthew.kay@gmail.com","is_corresponding":false,"name":"Matthew Kay"},{"affiliations":["Northwestern University, Evanston, United States"],"email":"jhullman@northwestern.edu","is_corresponding":false,"name":"Jessica Hullman"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Ziyang Guo"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1309","time_end":"","time_stamp":"","time_start":"","title":"VMC: A Grammar for Visualizing Statistical Model Checks","uid":"v-full-1309","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1316":{"abstract":"We apply an approach from cognitive linguistics by mapping Conceptual Metaphor Theory (CMT) to the visualization domain to address patterns of visual conceptual metaphors that are often used in science infographics. Metaphors play an essential part in visual communication and are frequently employed to explain complex concepts. However, their use is often based on intuition, rather than following a formal process. At present, we lack tools and language for understanding and describing metaphor use in visualization to the extent where taxonomy and grammar could guide the creation of visual components, e.g., infographics. Our classification of the visual conceptual mappings within scientific representations is based on the breakdown of visual components in existing scientific infographics. We demonstrate the development of this mapping through a detailed analysis of data collected from four domains (biomedicine, climate, space, and anthropology) that represent a diverse range of visual conceptual metaphors used in the visual communication of science. This work allows us to identify patterns of visual conceptual metaphor use within the domains, resolve ambiguities about why specific conceptual metaphors are used, and develop a better overall understanding of visual metaphor use in scientific infographics. Our analysis shows that ontological and orientational conceptual metaphors are the most widely applied to translate complex scientific concepts. 
To support our findings we developed a visual exploratory tool based on the collected database that places the individual infographics on a spatio-temporal scale and illustrates the breakdown of visual conceptual metaphors.","accessible_pdf":false,"authors":[{"affiliations":["Masaryk University, Brno, Czech Republic"],"email":"hana.pokojna@gmail.com","is_corresponding":true,"name":"Hana Pokojn\u00e1"},{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Orsay, France","Inria, Saclay, France"],"email":"tobias.isenberg@gmail.com","is_corresponding":false,"name":"Tobias Isenberg"},{"affiliations":["University of Rostock, Rostock, Germany"],"email":"stefan.bruckner@gmail.com","is_corresponding":false,"name":"Stefan Bruckner"},{"affiliations":["Masaryk University, Brno, Czech Republic"],"email":"kozlikova@fi.muni.cz","is_corresponding":false,"name":"Barbora Kozlikova"},{"affiliations":["University of Bergen, Bergen, Norway","Haukeland University Hospital, University of Bergen, Bergen, Norway"],"email":"laura.garrison@uib.no","is_corresponding":false,"name":"Laura Garrison"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Hana Pokojn\u00e1"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1316","time_end":"","time_stamp":"","time_start":"","title":"The Language of Infographics: Toward Understanding Conceptual Metaphor Use in Scientific Storytelling","uid":"v-full-1316","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1318":{"abstract":"In this study, we address the growing issue of misleading charts, a prevalent problem that undermines the integrity of information dissemination. Misleading charts can distort the viewer's perception of data, leading to misinterpretations and decisions based on false information. The development of effective automatic detection methods for misleading charts is an urgent field of research. The advancement of multimodal Large Language Models (LLMs) has introduced a promising direction for addressing this challenge. We explored the capabilities of these models in analyzing complex charts and assessing the impact of different prompting strategies on the models' analyses. We utilized a dataset of misleading charts collected from the internet by prior research and crafted nine distinct prompts, ranging from simple to complex, to test the ability of four different multimodal LLMs in detecting over 21 different chart issues. Through three experiments--from initial exploration to detailed analysis--we progressively gained insights into how to effectively prompt LLMs to identify misleading charts and developed strategies to address the scalability challenges encountered as we expanded our detection range from the initial five issues to 21 issues in the final experiment. Our findings reveal that multimodal LLMs possess a strong capability for chart comprehension and critical thinking in data interpretation. There is significant potential in employing multimodal LLMs to counter misleading information by supporting critical thinking and enhancing visualization literacy. 
This study demonstrates their applicability in addressing the pressing concern of misleading charts.","accessible_pdf":false,"authors":[{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"yhload@cse.ust.hk","is_corresponding":true,"name":"Leo Yu-Ho Lo"},{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"huamin@cse.ust.hk","is_corresponding":false,"name":"Huamin Qu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Leo Yu-Ho Lo"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1318","time_end":"","time_stamp":"","time_start":"","title":"How Good (Or Bad) Are LLMs in Detecting Misleading Visualizations","uid":"v-full-1318","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1325":{"abstract":"Dynamic data visualizations can convey large amounts of information over time, such as using motion to depict changes in data values for multiple entities. Such dynamic displays put a demand on our visual processing capacities, yet our perception of motion is limited. When tracking multiple objects across space and time, humans can typically track up to four objects, and the capacity is even lower if we also need to remember the history of the objects\u2019 features. Several techniques have been shown to improve the processing of dynamic displays. Staging the animation to sequentially show steps in a transition and tracing object movement by displaying trajectory histories can increase processing by reducing the cognitive load. In this paper, we examine the effectiveness of staging and tracing in dynamic displays. We showed participants animated line charts depicting the movements of lines and asked them to identify the line with the highest mean and variance. We manipulated the animation to display the lines with or without staging, tracing, and history, and compared the results to a static chart as a control. Results showed that tracing and staging are preferred by participants, and improve their performance in mean and variance tasks, respectively. The preferred display time was 3 times shorter when staging was used. Also, encoding animation speed with mean and variance in congruent tasks is associated with higher accuracy.
These findings help inform real-world best practices for building dynamic displays that leverage the strength of humans' visual processing.","accessible_pdf":false,"authors":[{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"shu343@gatech.edu","is_corresponding":true,"name":"Songwen Hu"},{"affiliations":["Northwestern University, Evanston, United States"],"email":"ouxunjiang@u.northwestern.edu","is_corresponding":false,"name":"Ouxun Jiang"},{"affiliations":["Dolby Laboratories Inc., San Francisco, United States"],"email":"jcr@dolby.com","is_corresponding":false,"name":"Jeffrey Riedmiller"},{"affiliations":["Georgia Tech, Atlanta, United States","University of Massachusetts Amherst, Amherst, United States"],"email":"cxiong@gatech.edu","is_corresponding":false,"name":"Cindy Xiong Bearfield"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Songwen Hu"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1325","time_end":"","time_stamp":"","time_start":"","title":"Motion-Based Visual Encoding Can Improve Performance on Perceptual Tasks with Dynamic Time Series","uid":"v-full-1325","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1326":{"abstract":"Evaluating the quality of text responses generated by large language models (LLMs) poses unique challenges compared to traditional machine learning. While automatic side-by-side evaluation has emerged as a promising approach, LLM developers face scalability and interpretability challenges in analyzing these evaluation results. In this paper, we present LLM Comparator, a novel visual analytics tool for interactively analyzing results from side-by-side evaluation of LLMs. The tool provides users with interactive workflows to understand when and why a model performs better or worse than a baseline model, and how the responses from two models differ qualitatively. We iteratively designed and developed the tool by closely working with researchers and engineers at a large technology company. Qualitative feedback from users highlights that the tool facilitates in-depth analysis of individual examples while enabling users to visually overview and flexibly slice data. 
This empowers users to identify undesirable patterns, formulate hypotheses about model behavior, and gain insights for model improvement.","accessible_pdf":false,"authors":[{"affiliations":["Google, Atlanta, United States"],"email":"minsuk.kahng@gmail.com","is_corresponding":true,"name":"Minsuk Kahng"},{"affiliations":["Google Research, Seattle, United States"],"email":"iftenney@google.com","is_corresponding":false,"name":"Ian Tenney"},{"affiliations":["Google Research, Cambridge, United States"],"email":"mahimap@google.com","is_corresponding":false,"name":"Mahima Pushkarna"},{"affiliations":["Google Research, Pittsburgh, United States"],"email":"lxieyang.cmu@gmail.com","is_corresponding":false,"name":"Michael Xieyang Liu"},{"affiliations":["Google Research, Cambridge, United States"],"email":"jwexler@google.com","is_corresponding":false,"name":"James Wexler"},{"affiliations":["Google, Cambridge, United States"],"email":"ereif@google.com","is_corresponding":false,"name":"Emily Reif"},{"affiliations":["Google Research, Mountain View, United States"],"email":"kallarackal@google.com","is_corresponding":false,"name":"Krystal Kallarackal"},{"affiliations":["Google Research, Seattle, United States"],"email":"minsuk.cs@gmail.com","is_corresponding":false,"name":"Minsuk Chang"},{"affiliations":["Google, Cambridge, United States"],"email":"michaelterry@google.com","is_corresponding":false,"name":"Michael Terry"},{"affiliations":["Google, Paris, France"],"email":"ldixon@google.com","is_corresponding":false,"name":"Lucas Dixon"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Minsuk Kahng"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1326","time_end":"","time_stamp":"","time_start":"","title":"LLM Comparator: Interactive Analysis of Side-by-Side Evaluation of Large Language Models","uid":"v-full-1326","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1329":{"abstract":"The integration of Large Language Models (LLMs), especially ChatGPT, into education is poised to revolutionize students' learning experiences by introducing innovative conversational learning methodologies. To empower students to fully leverage the capabilities of ChatGPT in educational scenarios, understanding students' interaction patterns with ChatGPT is crucial for instructors. However, this endeavor is challenging due to the absence of datasets focused on student-ChatGPT conversations and the complexities in identifying and analyzing the evolutional interaction patterns within conversations. To address these challenges, we collected conversational data from 48 students interacting with ChatGPT in a master's level data visualization course over one semester. We then developed a coding scheme, grounded in the literature on cognitive levels and thematic analysis, to categorize students' interaction patterns with ChatGPT. Furthermore, we present a visual analytics system, StuGPTViz, that tracks and compares temporal patterns in student prompts and the quality of ChatGPT's responses at multiple scales, revealing significant pedagogical insights for instructors. We validated the system's effectiveness through expert interviews with six data visualization instructors and three case studies. 
The results confirmed StuGPTViz's capacity to enhance educators' insights into the pedagogical value of ChatGPT. We also discussed the potential research opportunities of applying visual analytics in education and developing AI-driven personalized learning solutions.","accessible_pdf":false,"authors":[{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"zchendf@connect.ust.hk","is_corresponding":true,"name":"Zixin Chen"},{"affiliations":["The Hong Kong University of Science and Technology, Sai Kung, China"],"email":"csejiachenw@ust.hk","is_corresponding":false,"name":"Jiachen Wang"},{"affiliations":["Texas A&M University, College Station, United States"],"email":"xiameng9355@gmail.com","is_corresponding":false,"name":"Meng Xia"},{"affiliations":["The Hong Kong University of Science and Technology, Kowloon, Hong Kong"],"email":"kshigyo@connect.ust.hk","is_corresponding":false,"name":"Kento Shigyo"},{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"dliuak@connect.ust.hk","is_corresponding":false,"name":"Dingdong Liu"},{"affiliations":["Hong Kong University of Science and Technology, Hong Kong, Hong Kong"],"email":"rzhangab@connect.ust.hk","is_corresponding":false,"name":"Rong Zhang"},{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"huamin@cse.ust.hk","is_corresponding":false,"name":"Huamin Qu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Zixin Chen"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1329","time_end":"","time_stamp":"","time_start":"","title":"StuGPTViz: A Visual Analytics Approach to Understand Student-ChatGPT Interactions","uid":"v-full-1329","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1332":{"abstract":"Translating natural language to visualization (NL2VIS) has shown great promise for visual data analysis, but it remains a challenging task that requires multiple low-level implementations, such as natural language processing and visualization design. Recent advancements in pre-trained large language models (LLMs) are opening new avenues for generating visualizations from natural language. However, the lack of a comprehensive and reliable benchmark hinders our understanding of LLMs\u2019 capabilities in visualization generation. In this paper, we address this gap by proposing a new NL2VIS benchmark called VisEval. Firstly, we introduce a high-quality and large-scale dataset. This dataset includes 2,524 representative queries covering 146 databases, paired with accurately labeled ground truths. Secondly, we advocate for a comprehensive automated evaluation methodology covering multiple dimensions, including validity, legality, and readability. By systematically scanning for potential issues with a number of heterogeneous checkers, VisEval provides reliable and trustworthy evaluation outcomes. We run VisEval on a series of state-of-the-art LLMs.
Our evaluation reveals prevalent challenges and delivers essential insights for future advancements.","accessible_pdf":false,"authors":[{"affiliations":["Microsoft Research, Shanghai, China"],"email":"christy05.chen@gmail.com","is_corresponding":true,"name":"Nan Chen"},{"affiliations":["Microsoft Research, Shanghai, China"],"email":"scottyugochang@gmail.com","is_corresponding":false,"name":"Yuge Zhang"},{"affiliations":["Microsoft Research, Shanghai, China"],"email":"jiahangxu@microsoft.com","is_corresponding":false,"name":"Jiahang Xu"},{"affiliations":["ShanghaiTech University, Shanghai, China"],"email":"rk.ren@outlook.com","is_corresponding":false,"name":"Kan Ren"},{"affiliations":["Microsoft Research, Shanghai, China"],"email":"yuqyang@microsoft.com","is_corresponding":false,"name":"Yuqing Yang"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Nan Chen"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1332","time_end":"","time_stamp":"","time_start":"","title":"VisEval: A Benchmark for Data Visualization in the Era of Large Language Models","uid":"v-full-1332","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1333":{"abstract":"Data videos are increasingly becoming a popular data storytelling form that integrates visuals and audio. In recent years, more and more researchers have explored many narrative structures for effective and attractive data storytelling. Meanwhile, the Hero's Journey provides a classic narrative framework specific to the Hero's story that has been adopted by various mediums. There are continuous discussions about applying the Hero's Journey to data stories. However, so far, little systematic and practical guidance exists on how to create a data video for a specific story type like the Hero's Journey, or on how to manipulate its sound and visual designs simultaneously. To fill this gap, we first identified 48 data videos that follow the Hero's Journey from 109 high-quality data videos. Then, we examined how existing practices apply the Hero's Journey to create data videos. We coded the 48 data videos in terms of the narrative stages, sound design, and visual design according to the Hero's Journey structure. Based on our findings, we proposed a design space to provide practical guidance on customizing the narrative, visual, and sound design for the different narrative segments of the Hero's Journey (i.e., Departure, Initiation, Return) in data video creation. To validate our proposed design space, we conducted a user study where 20 participants were invited to design data videos with and without our design space guidance, which was evaluated by two experts.
Results show that our design space provides useful and practical guidance that helps data storytellers effectively create data videos based on the Hero's Journey.","accessible_pdf":false,"authors":[{"affiliations":["The Hong Kong University of Science and Technology, Guangzhou, China"],"email":"zwei302@connect.hkust-gz.edu.cn","is_corresponding":true,"name":"Zheng Wei"},{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"huamin@cse.ust.hk","is_corresponding":false,"name":"Huamin Qu"},{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"xxubq@connect.ust.hk","is_corresponding":false,"name":"Xian Xu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Zheng Wei"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1333","time_end":"","time_stamp":"","time_start":"","title":"Telling Data Stories with the Hero\u2019s Journey: Design Guidance for Creating Data Videos","uid":"v-full-1333","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1342":{"abstract":"Genomics experts rely on visualization to extract and share insights from complex and large-scale datasets. Beyond off-the-shelf tools for data exploration, there is an increasing need for platforms that aid experts in authoring customized visualizations for both exploration and communication of insights. A variety of interactive techniques have been proposed for authoring data visualizations, such as template editing, shelf configuration, natural language input, and code editors. However, it remains unclear how genomics experts create visualizations and which techniques best support their visualization tasks and needs. To address this gap, we conducted two user studies with genomics researchers: (1) semi-structured interviews (n=20) to identify the tasks, user contexts, and current visualization authoring techniques and (2) an exploratory study (n=13) using visual probes to elicit users\u2019 intents and desired techniques when creating visualizations. Our contributions include (1) a characterization of how visualization authoring is currently utilized in genomics visualization, identifying limitations and benefits in light of common criteria for authoring tools, and (2) generalizable and actionable design implications for genomics visualization authoring tools based on our findings on task- and user-specific usefulness of authoring techniques.","accessible_pdf":false,"authors":[{"affiliations":["Eindhoven University of Technology, Eindhoven, Netherlands"],"email":"a.v.d.brandt@tue.nl","is_corresponding":true,"name":"Astrid van den Brandt"},{"affiliations":["Harvard Medical School, Boston, United States"],"email":"sehi_lyi@hms.harvard.edu","is_corresponding":false,"name":"Sehi L'Yi"},{"affiliations":["Harvard Medical School, Boston, United States"],"email":"huyen_nguyen@hms.harvard.edu","is_corresponding":false,"name":"Huyen N.
Nguyen"},{"affiliations":["Eindhoven University of Technology, Eindhoven, Netherlands"],"email":"a.vilanova@tue.nl","is_corresponding":false,"name":"Anna Vilanova"},{"affiliations":["Harvard Medical School, Boston, United States"],"email":"nils@hms.harvard.edu","is_corresponding":false,"name":"Nils Gehlenborg"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Astrid van den Brandt"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1342","time_end":"","time_stamp":"","time_start":"","title":"Understanding Visualization Authoring Techniques for Genomics Data in the Context of Personas and Tasks","uid":"v-full-1342","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1351":{"abstract":"As basketball\u2019s popularity surges, fans often find themselves confused and overwhelmed by the rapid game pace and complexity. Basketball tactics, involving a complex series of actions, require substantial knowledge to be fully understood. This complexity leads to a need for additional information and explanation, which can distract fans from the game. To tackle these challenges, we present Sportify, a Visual Question Answering system that integrates narratives and embedded visualization for demystifying basketball tactical questions, aiding fans in understanding various game aspects. We propose three novel action visualizations (i.e., Pass, Cut, and Screen) to demonstrate critical action sequences. To explain the reasoning and logic behind players\u2019 actions, we leverage a large-language model (LLM) to generate narratives. We adopt a storytelling approach for complex scenarios from both first and third-person perspectives, integrating action visualizations. We evaluated Sportify with basketball fans to investigate its impact on understanding of tactics, and how different personal perspectives of narratives impact the understanding of complex tactic with action visualizations. Our evaluation with basketball fans demonstrates Sportify\u2019s capability to deepen tactical insights and amplify the viewing experience. 
Furthermore, third-person narration helps people obtain in-depth game explanations, while first-person narration enhances fans\u2019 game engagement.","accessible_pdf":false,"authors":[{"affiliations":["Harvard University, Allston, United States"],"email":"chungyi347@gmail.com","is_corresponding":true,"name":"Chunggi Lee"},{"affiliations":["Harvard University, Cambridge, United States"],"email":"mlin@g.harvard.edu","is_corresponding":false,"name":"Tica Lin"},{"affiliations":["University of Minnesota-Twin Cities, Minneapolis, United States"],"email":"ztchen@umn.edu","is_corresponding":false,"name":"Chen Zhu-Tian"},{"affiliations":["Harvard University, Cambridge, United States"],"email":"pfister@seas.harvard.edu","is_corresponding":false,"name":"Hanspeter Pfister"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Chunggi Lee"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1351","time_end":"","time_stamp":"","time_start":"","title":"Sportify: Question Answering with Embedded Visualizations and Personified Narratives for Sports Video","uid":"v-full-1351","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1363":{"abstract":"Data visualization aids in making data analysis more intuitive and in-depth, with widespread applications in fields such as biology, finance, and medicine. Massive and continuously growing streaming time series data are typically visualized in the form of line charts, but the data transmission puts significant pressure on the network, leading to visualization lag or even complete rendering failure. This paper proposes a universal sampling algorithm, FPCS, which retains feature points from continuously received streaming time series data, compensates for frequently fluctuating feature points, and aims to achieve efficient visualization. This algorithm bridges the gap in sampling for streaming time series data.
The algorithm has several advantages: (1) It optimizes the sampling results by compensating for fewer feature points, retaining the visualization features of the original data very well, ensuring high-quality sampled data; (2) The execution time is the shortest compared to similar existing algorithms; (3) It has an almost negligible space overhead; (4) The data sampling process does not depend on the overall data; (5) This algorithm can be applied to infinite streaming data and finite static data.","accessible_pdf":false,"authors":[{"affiliations":["China Nanhu Academy of Electronics and Information Technology(CNAEIT), JiaXing, China"],"email":"3271961659@qq.com","is_corresponding":true,"name":"Hongyan Li"},{"affiliations":["China Nanhu Academy of Electronics and Information Technology(CNAEIT), JiaXing, China"],"email":"ustcboy@outlook.com","is_corresponding":false,"name":"Bo Yang"},{"affiliations":["China Nanhu Academy of Electronics and Information Technology, Jiaxing, China"],"email":"caiyansong@cnaeit.com","is_corresponding":false,"name":"Yansong Chua"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Hongyan Li"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1363","time_end":"","time_stamp":"","time_start":"","title":"FPCS: Feature Preserving Compensated Sampling of Streaming Time Series Data","uid":"v-full-1363","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1368":{"abstract":"Synthetic Lethal (SL) relationships, although rare among the vast array of gene combinations, hold substantial promise for targeted cancer therapy. Despite advancements in AI model accuracy, there remains a persistent need among domain experts for interpretive paths and mechanism explorations that better harmonize with domain-specific knowledge, particularly due to the significant costs involved in experimentation. To address this gap, we propose an iterative Human-AI collaborative framework comprising two key components: 1)Human-Engaged Knowledge Graph Refinement based on Metapath Strategies, which leverages insights from interpretive paths and domain expertise to refine the knowledge graph through metapath strategies with appropriate granularity. 2)Cross-Granularity SL Interpretation Enhancement and Mechanism Analysis, which aids domain experts in organizing and comparing prediction results and interpretive paths across different granularities, thereby uncovering new SL relationships, enhancing result interpretation, and elucidating potential mechanisms inferred by Graph Neural Network (GNN) models. These components cyclically optimize model predictions and mechanism explorations, thereby enhancing expert involvement and intervention to build trust. This framework, facilitated by SLInterpreter, ensures that newly generated interpretive paths increasingly align with domain knowledge and adhere more closely to real-world biological principles through iterative Human-AI collaboration. 
Subsequently, we evaluate the efficacy of the framework through a case study and expert interviews.","accessible_pdf":false,"authors":[{"affiliations":["ShanghaiTech University, Shanghai, China"],"email":"jianghr2023@shanghaitech.edu.cn","is_corresponding":true,"name":"Haoran Jiang"},{"affiliations":["ShanghaiTech University, Shanghai, China"],"email":"shishh2023@shanghaitech.edu.cn","is_corresponding":false,"name":"Shaohan Shi"},{"affiliations":["ShanghaiTech University, Shanghai, China"],"email":"zhangshh2@shanghaitech.edu.cn","is_corresponding":false,"name":"Shuhao Zhang"},{"affiliations":["ShanghaiTech University, Shanghai, China"],"email":"zhengjie@shanghaitech.edu.cn","is_corresponding":false,"name":"Jie Zheng"},{"affiliations":["ShanghaiTech University, Shanghai, China"],"email":"liquan@shanghaitech.edu.cn","is_corresponding":false,"name":"Quan Li"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Haoran Jiang"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1368","time_end":"","time_stamp":"","time_start":"","title":"SLInterpreter: An Exploratory and Iterative Human-AI Collaborative System for GNN-based Synthetic Lethal Prediction","uid":"v-full-1368","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1391":{"abstract":"In volume visualization, visualization synthesis has attracted much attention due to its ability to generate novel visualizations without following the conventional rendering pipeline. However, existing solutions based on generative adversarial networks often require many training images and take significant training time. Still, issues with quality, consistency, and flexibility persist. This paper introduces StyleRF-VolVis, an innovative style transfer framework for expressive volume visualization (VolVis) via neural radiance field (NeRF). The expressiveness of StyleRF-VolVis is upheld by its ability to accurately separate the underlying scene geometry (i.e., content) and color appearance (i.e., style), conveniently modify color, opacity, and lighting of the original rendering while maintaining visual content consistency across the views, and effectively transfer arbitrary styles from reference images to the reconstructed 3D scene. To achieve these, we design a base NeRF model for scene geometry extraction, a palette color network to classify regions of the radiance field for photorealistic editing, and an unrestricted color network to lift the color palette constraint via knowledge distillation for non-photorealistic editing.
We demonstrate the superior quality, consistency, and flexibility of StyleRF-VolVis by experimenting with various volume rendering scenes and reference images and comparing StyleRF-VolVis against other image-based (AdaIN), video-based (ReReVST), and NeRF-based (ARF and SNeRF) style rendering solutions.","accessible_pdf":false,"authors":[{"affiliations":["University of Notre Dame, Notre Dame, United States"],"email":"ktang2@nd.edu","is_corresponding":true,"name":"Kaiyuan Tang"},{"affiliations":["University of Notre Dame, Notre Dame, United States"],"email":"chaoli.wang@nd.edu","is_corresponding":false,"name":"Chaoli Wang"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Kaiyuan Tang"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1391","time_end":"","time_stamp":"","time_start":"","title":"StyleRF-VolVis: Style Transfer of Neural Radiance Fields for Expressive Volume Visualization","uid":"v-full-1391","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1393":{"abstract":"This paper discusses challenges and design strategies in responsive design for thematic maps in information visualization. Thematic maps pose a number of unique challenges for responsiveness, such as inflexible aspect ratios that do not easily adapt to varying screen dimensions, or densely clustered visual elements in urban areas becoming illegible at smaller scales. However, design guidance on how to best address these issues is currently lacking. We conducted design sessions with eight professional designers and developers of web-based thematic maps for information visualization. Participants were asked to redesign a given map for various screen sizes and aspect ratios and to describe their reasoning for when and how they adapted the design. We report general observations of practitioners\u2019 motivations, decision-making processes, and personal design frameworks. We then derive seven challenges commonly encountered in responsive map design, and 17 strategies to address them, such as repositioning elements, segmenting the map, or using alternative visualizations. We compile these challenges and strategies into an illustrated cheat sheet targeted at anyone designing or learning to design responsive maps. 
The cheat sheet is available online: https://responsive-vis.github.io/map-cheat-sheet.","accessible_pdf":false,"authors":[{"affiliations":["University of Edinburgh, Edinburgh, United Kingdom"],"email":"sarah.schoettler@ed.ac.uk","is_corresponding":true,"name":"Sarah Sch\u00f6ttler"},{"affiliations":["University of Edinburgh, Edinburgh, United Kingdom"],"email":"uhinrich@ed.ac.uk","is_corresponding":false,"name":"Uta Hinrichs"},{"affiliations":["Inria, Bordeaux, France","University of Edinburgh, Edinburgh, United Kingdom"],"email":"bbach@inf.ed.ac.uk","is_corresponding":false,"name":"Benjamin Bach"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Sarah Sch\u00f6ttler"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1393","time_end":"","time_stamp":"","time_start":"","title":"Practices and Strategies in Responsive Thematic Map Design: A Report from Design Workshops with Experts","uid":"v-full-1393","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1394":{"abstract":"This paper presents discursive patinas, a technique to visualize discussions onto data visualizations, inspired by how people leave traces in the physical world. While data visualizations are widely discussed in online communities and social media, comments tend to be displayed separately from the visualization. We lack ways to relate these discussions to the content of the visualization, e.g., to situate comments, explain visual patterns, or question assumptions. In our visualization annotation interface, users can designate areas within the visualization to, e.g., highlight specific visual marks (anchors), attach textual comments, and add category labels, likes, and replies. By coloring and styling these designated areas, a meta visualization emerges, showing what and where people comment and annotate. These patinas show regions of heavy discussions, recent commenting activity, and the distribution of questions, suggestions, or personal stories. To study how people use anchors to discuss visualizations and understand if and how information in patinas influence people's understanding of the discussion, we ran workshops with 90 participants including students, domain experts, and visualization researchers. Our results show that discursive patinas improve the ability to navigate discussions and guide people to comments that help understand, contextualize, or scrutinize the visualization. 
We discuss the potential of the technique to support discursive engagements, including critical readings of visualizations, design feedback, and feminist approaches to data visualization.","accessible_pdf":false,"authors":[{"affiliations":["University of Edinburgh, Edinburgh, United Kingdom","Potsdam University of Applied Sciences, Potsdam, Germany"],"email":"tobias.kauer@fh-potsdam.de","is_corresponding":true,"name":"Tobias Kauer"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"derya.akbaba@liu.se","is_corresponding":false,"name":"Derya Akbaba"},{"affiliations":["University of Applied Sciences Potsdam, Potsdam, Germany"],"email":"doerk@fh-potsdam.de","is_corresponding":false,"name":"Marian D\u00f6rk"},{"affiliations":["Inria, Bordeaux, France","University of Edinburgh, Edinburgh, United Kingdom"],"email":"bbach@inf.ed.ac.uk","is_corresponding":false,"name":"Benjamin Bach"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Tobias Kauer"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1394","time_end":"","time_stamp":"","time_start":"","title":"Discursive Patinas: Anchoring Discussions in Data Visualizations","uid":"v-full-1394","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1395":{"abstract":"Onboarding a user to a visualization dashboard entails explaining its various components, including the chart types used, the data loaded, and the interactions provided. Authoring such an onboarding experience is time-consuming and requires significant knowledge, and little guidance exists on how best to do this. End-users being onboarded to a new dashboard can be either confused and overwhelmed, or disinterested and disengaged, depending on the user\u2019s expertise. We propose interactive dashboard tours (d-tours) as semi-automated onboarding experiences for variable user expertise that preserve the user\u2019s agency, interest, and engagement. Our interactive tours concept draws from open-world game design to give the user freedom in choosing their path in the onboarding. We have implemented the concept in a tool called D-TOUR PROTOTYPE that allows authors to craft custom and interactive dashboard tours from scratch or using automatic templates. Automatically generated tours can still be customized to use different media (such as video, audio, or highlighting) or new narratives to produce a tailored onboarding experience for individual users or groups. We demonstrate the usefulness of interactive dashboard tours through use cases and expert interviews. The evaluation shows that the authors find the automation in the DTour prototype helpful and time-saving and the users find it engaging and intuitive. 
This paper and all supplemental materials are available at https://osf.io/6fbjp/.","accessible_pdf":false,"authors":[{"affiliations":["Pro2Future GmbH, Linz, Austria","Johannes Kepler University, Linz, Austria"],"email":"vaishali.dhanoa@pro2future.at","is_corresponding":true,"name":"Vaishali Dhanoa"},{"affiliations":["Johannes Kepler University, Linz, Austria"],"email":"andreas.hinterreiter@jku.at","is_corresponding":false,"name":"Andreas Hinterreiter"},{"affiliations":["Johannes Kepler University, Linz, Austria"],"email":"vanessa.fediuk@jku.at","is_corresponding":false,"name":"Vanessa Fediuk"},{"affiliations":["Aarhus University, Aarhus, Denmark"],"email":"elm@cs.au.dk","is_corresponding":false,"name":"Niklas Elmqvist"},{"affiliations":["Institute of Visual Computing & Human-Centered Technology, Vienna, Austria"],"email":"groeller@cg.tuwien.ac.at","is_corresponding":false,"name":"Eduard Gr\u00f6ller"},{"affiliations":["Johannes Kepler University Linz, Linz, Austria"],"email":"marc.streit@jku.at","is_corresponding":false,"name":"Marc Streit"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Vaishali Dhanoa"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1395","time_end":"","time_stamp":"","time_start":"","title":"D-Tour: Semi-Automatic Generation of Interactive Guided Tours for Visualization Dashboard Onboarding","uid":"v-full-1395","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1414":{"abstract":"Visualization designers often rely on examples to explore the space of possible designs, yet we have little insight into how examples shape data visualization design outcomes. While the effects of examples have been studied in other disciplines, such as web design or engineering, the results are not readily applicable to visualization design due to inconsistencies in findings and challenges unique to visualization design. Towards bridging this gap, we conduct an exploratory experiment involving 32 data visualization designers focusing on the influence of five factors (timing, quantity, diversity, data topic similarity, and data schema similarity) on objectively measurable design outcomes (e.g., numbers of designs and idea transfers). Our quantitative analysis shows that when examples are introduced after initial brainstorming, designers curate examples with topics less similar to the dataset they are working on and produce more designs with a high variation in visualization components. Also, designers copy more ideas from examples with higher data schema similarities. Our qualitative analysis of participants\u2019 thought processes provides insights into why designers incorporate examples into their designs, revealing potential factors that have not been previously investigated. Finally, we discuss how our results inform future work on quantifying designs, improving measures of effectiveness, and supporting example-based visualization design. All supplementary materials are available at https://osf.io/sbp2k/?view_only=ca14af497f5845a0b1b2c616699fefc5","accessible_pdf":false,"authors":[{"affiliations":["University of Maryland, College Park, United States"],"email":"hbako@umd.edu","is_corresponding":true,"name":"Hannah K.
Bako"},{"affiliations":["The University of Texas at Austin, Austin, United States"],"email":"xinyi.liu@utexas.edu","is_corresponding":false,"name":"Xinyi Liu"},{"affiliations":["University of Maryland, College Park, United States"],"email":"gko1@terpmail.umd.edu","is_corresponding":false,"name":"Grace Ko"},{"affiliations":["Human Data Interaction Lab, College Park, United States"],"email":"hsong02@cs.umd.edu","is_corresponding":false,"name":"Hyemi Song"},{"affiliations":["University of Washington, Seattle, United States"],"email":"leibatt@cs.washington.edu","is_corresponding":false,"name":"Leilani Battle"},{"affiliations":["University of Maryland, College Park, United States"],"email":"leozcliu@umd.edu","is_corresponding":false,"name":"Zhicheng Liu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Hannah K. Bako"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1414","time_end":"","time_stamp":"","time_start":"","title":"Unveiling How Examples Shape Data Visualization Design Outcomes","uid":"v-full-1414","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1416":{"abstract":"Various data visualization downstream applications such as reverse engineering and interactive authoring require a vocabulary that describes the structure of visualization scenes and the procedure to manipulate them. A few scene abstractions have been proposed, but they are restricted to specific applications for a limited set of visualization types. A unified and expressive model of data visualization scenes for different downstream applications has been missing. To fill this gap, we present Manipulable Semantic Components (MSC), a computational representation of data visualization scenes, to support applications in scene understanding and augmentation. MSC consists of two parts: a unified object model describing the structure of a visualization scene in terms of semantic components, and a set of operations to generate and modify the scene components. 
We demonstrate the benefits of MSC in three applications: visualization authoring, visualization deconstruction and reuse, and animation specification.","accessible_pdf":false,"authors":[{"affiliations":["University of Maryland, College Park, United States"],"email":"leozcliu@umd.edu","is_corresponding":true,"name":"Zhicheng Liu"},{"affiliations":["University of Maryland, College Park, United States"],"email":"cchen24@umd.edu","is_corresponding":false,"name":"Chen Chen"},{"affiliations":["University of Maryland, College Park, United States"],"email":"hookerj100@gmail.com","is_corresponding":false,"name":"John Hooker"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Zhicheng Liu"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1416","time_end":"","time_stamp":"","time_start":"","title":"Manipulable Semantic Components: a Computational Representation of Data Visualization Scenes","uid":"v-full-1416","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1422":{"abstract":"Visualization items\u2014factual questions about visualizations that ask viewers to accomplish visualization tasks\u2014are regularly used in the field of information visualization as educational and evaluative materials. For example, researchers of visualization literacy require large, diverse banks of items to conduct studies where the same skill is measured repeatedly on the same participants. Yet, generating a large number of high-quality, diverse items requires significant time and expertise. To address the critical need for a large number of diverse visualization items in education and research, this paper investigates the potential for large language models (LLMs) to automate the generation of multiple-choice visualization items. Through an iterative design process, we develop an LLM-based pipeline, the VILA (Visualization Items Generated by Large LAnguage Models) pipeline, for efficiently generating visualization items that measure people\u2019s ability to accomplish visualization tasks. We use the VILA pipeline to generate 1,404 candidate items across 12 chart types and 13 visualization tasks. In collaboration with 11 visualization experts, we develop an evaluation rulebook which we then use to rate the quality of all candidate items. The result is a final bank, the VILA bank, of \u223c1,100 items. From this evaluation, we also identify and classify current limitations of LLMs in generating visualization items, and discuss the role of human oversight in ensuring quality. In addition, we demonstrate an application of our work by creating a visualization literacy test, VILA-VLAT, which measures people\u2019s ability to complete a diverse set of tasks on various types of visualizations; to show the potential of this application, we assess the convergent validity of VILA-VLAT by comparing it to the existing test VLAT via an online study (R = 0.70). Lastly, we discuss the application areas of the VILA pipeline and the VILA bank and provide practical recommendations for their use. 
All supplemental materials are available at https://osf.io/ysrhq/?view_only=e31b3ddf216e4351bb37bcedf744e9d6.","accessible_pdf":false,"authors":[{"affiliations":["Northwestern University, Evanston, United States"],"email":"yuancui2025@u.northwestern.edu","is_corresponding":true,"name":"Yuan Cui"},{"affiliations":["Northwestern University, Evanston, United States"],"email":"wanqian.ge@northwestern.edu","is_corresponding":false,"name":"Lily W. Ge"},{"affiliations":["Worcester Polytechnic Institute, Worcester, United States"],"email":"yding5@wpi.edu","is_corresponding":false,"name":"Yiren Ding"},{"affiliations":["Worcester Polytechnic Institute, Worcester, United States"],"email":"ltharrison@wpi.edu","is_corresponding":false,"name":"Lane Harrison"},{"affiliations":["Northwestern University, Evanston, United States"],"email":"fumeng.p.yang@gmail.com","is_corresponding":false,"name":"Fumeng Yang"},{"affiliations":["Northwestern University, Chicago, United States"],"email":"matthew.kay@gmail.com","is_corresponding":false,"name":"Matthew Kay"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yuan Cui"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1422","time_end":"","time_stamp":"","time_start":"","title":"Promises and Pitfalls: Using Large Language Models to Generate Visualization Items","uid":"v-full-1422","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1425":{"abstract":"Comics have been shown to be an effective method for sequential data-driven storytelling, especially for dynamic graphs that change over time. However, manually creating a data-driven comic for a dynamic graph is currently time-consuming, complex, and error-prone. In this paper, we propose DG Comics, a novel comic authoring tool for dynamic graphs that allows users to semi-automatically build the comic and annotate it. The tool uses a hierarchical clustering algorithm that we newly developed for segmenting consecutive snapshots of the dynamic graph while preserving their chronological order. It also provides rich information on both individuals and communities extracted from dynamic graphs in multiple views, where users can explore dynamic graphs and choose what to tell in comics. For evaluation, we provide an example and report results from a user study and expert review.","accessible_pdf":false,"authors":[{"affiliations":["Ulsan National Institute of Science and Technology, Ulsan, Korea, Republic of"],"email":"joohee@unist.ac.kr","is_corresponding":true,"name":"Joohee Kim"},{"affiliations":["Ulsan National Institute of Science and Technology, Ulsan, Korea, Republic of"],"email":"gusdnr0916@unist.ac.kr","is_corresponding":false,"name":"Hyunwook Lee"},{"affiliations":["Ulsan National Institute of Science and Technology, Ulsan, Korea, Republic of"],"email":"ducnm@unist.ac.kr","is_corresponding":false,"name":"Duc M. 
Nguyen"},{"affiliations":["Australian National University, Canberra, Australia"],"email":"minjeong.shin@anu.edu.au","is_corresponding":false,"name":"Minjeong Shin"},{"affiliations":["IBM Research, Cambridge, United States"],"email":"bumchul.kwon@us.ibm.com","is_corresponding":false,"name":"Bum Chul Kwon"},{"affiliations":["UNIST, Ulsan, Korea, Republic of"],"email":"sako@unist.ac.kr","is_corresponding":false,"name":"Sungahn Ko"},{"affiliations":["Aarhus University, Aarhus, Denmark"],"email":"elm@cs.au.dk","is_corresponding":false,"name":"Niklas Elmqvist"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Joohee Kim"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1425","time_end":"","time_stamp":"","time_start":"","title":"DG Comics: Semi-Automatically Authoring Graph Comics for Dynamic Graphs","uid":"v-full-1425","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1427":{"abstract":"Numerical simulation serves as a cornerstone in scientific modeling, yet the process of fine-tuning simulation parameters poses significant challenges. Conventionally, parameter adjustment relies on extensive numerical simulations, data analysis, and expert insights, resulting in substantial computational costs and low efficiency. The emergence of deep learning in recent years has provided promising avenues for more efficient exploration of parameter spaces. However, existing approaches often lack intuitive methods for precise parameter adjustment and optimization. To tackle these challenges, we introduce ParamsDrag, a model that facilitates parameter space exploration through direct interaction with visualizations. Inspired by DragGAN, our ParamsDrag model operates in three steps. First, the generative component of ParamsDrag generates visualizations based on the input simulation parameters. Second, by directly dragging structure-related features in the visualizations, users can intuitively understand the controlling effect of different parameters. Third, with the understanding from the earlier step, users can steer ParamsDrag to produce dynamic visual outcomes. 
Through experiments conducted on real-world simulations and comparisons with state-of-the-art deep-learning-based approaches, we demonstrate the efficacy of our solution.","accessible_pdf":false,"authors":[{"affiliations":["Computer Network Information Center, Chinese Academy of Sciences, Beijing, China","University of Chinese Academy of Sciences, Beijing, China"],"email":"liguan@sccas.cn","is_corresponding":true,"name":"Guan Li"},{"affiliations":["Beijing Forestry University, Beijing, China"],"email":"leo_edumail@163.com","is_corresponding":false,"name":"Yang Liu"},{"affiliations":["Computer Network Information Center, Chinese Academy of Sciences, Beijing, China"],"email":"sgh@sccas.cn","is_corresponding":false,"name":"Guihua Shan"},{"affiliations":["Chinese Academy of Sciences, Beijing, China"],"email":"chengshiyu@cnic.cn","is_corresponding":false,"name":"Shiyu Cheng"},{"affiliations":["Beijing Forestry University, Beijing, China"],"email":"weiqun.cao@126.com","is_corresponding":false,"name":"Weiqun Cao"},{"affiliations":["Visa Research, Palo Alto, United States"],"email":"junpeng.wang.nk@gmail.com","is_corresponding":false,"name":"Junpeng Wang"},{"affiliations":["National Taiwan Normal University, Taipei City, Taiwan"],"email":"caseywang777@gmail.com","is_corresponding":false,"name":"Ko-Chih Wang"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Guan Li"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1427","time_end":"","time_stamp":"","time_start":"","title":"ParamsDrag: Interactive Parameter Space Exploration via Image-Space Dragging","uid":"v-full-1427","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1438":{"abstract":"Differential privacy ensures the security of individual privacy but poses challenges to data exploration processes because the limited privacy budget restricts the flexibility of exploration, and the noisy feedback on data requests leads to confusing uncertainty. In this study, we are the first to describe the corresponding exploration scenarios, including underlying requirements and available exploration strategies. To facilitate practical applications, we propose a visual analysis approach to the formulation of exploration strategies. Our approach applies a reinforcement learning model to provide diverse suggestions for exploration strategies according to the exploration intent of users. A novel visual design for representing uncertainty in correlation patterns is integrated into our prototype system to support the proposed approach. Finally, we conducted a user study and two case studies. 
The results of these studies verified that our approach can help develop strategies that satisfy the exploration intent of users.","accessible_pdf":false,"authors":[{"affiliations":["Nankai University, Tianjin, China"],"email":"wangxumeng@nankai.edu.cn","is_corresponding":true,"name":"Xumeng Wang"},{"affiliations":["Nankai University, Tianjin, China"],"email":"jiaoshuangcheng@mail.nankai.edu.cn","is_corresponding":false,"name":"Shuangcheng Jiao"},{"affiliations":["Arizona State University, Tempe, United States"],"email":"cbryan16@asu.edu","is_corresponding":false,"name":"Chris Bryan"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Xumeng Wang"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1438","time_end":"","time_stamp":"","time_start":"","title":"Defogger: A Visual Analysis Approach for Data Exploration of Sensitive Data Protected by Differential Privacy","uid":"v-full-1438","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1446":{"abstract":"We are currently witnessing an increase in web-based, data-driven initiatives that explain complex, contemporary issues through data and visualizations: climate change, sustainability, AI, or cultural discoveries. Many of these projects call themselves \"atlases\", a term that historically referred to collections of maps or scientific illustrations. To answer the question of what makes a \"visualization atlas\", we conducted a systematic analysis of 33 visualization atlases and semi-structured interviews with eight visualization atlas creators. Based on our results, we contribute (1) a definition of visualization atlases as an emerging format to present complex topics in a holistic, data-driven, and curated way through visualization, (2) a set of design patterns and design dimensions that led to (3) defining 5 visualization atlas genres, and (4) insights into atlas creation from the interviews. We found that visualization atlases are unique in that they combine exploratory visualization with narrative elements from data-driven storytelling and structured navigation mechanisms. They can act as reference, communication, or discovery tools targeting a wide range of audiences with different levels of domain knowledge. 
We conclude with a discussion of current design practices and emerging questions around the ethics and potential real-world impact of visualization atlases, aiming to inform their design and study.","accessible_pdf":false,"authors":[{"affiliations":["The University of Edinburgh, Edinburgh, United Kingdom"],"email":"jinrui.w@outlook.com","is_corresponding":true,"name":"Jinrui Wang"},{"affiliations":["Newcastle University, Newcastle Upon Tyne, United Kingdom"],"email":"xinhuan.shu@gmail.com","is_corresponding":false,"name":"Xinhuan Shu"},{"affiliations":["Inria, Bordeaux, France","University of Edinburgh, Edinburgh, United Kingdom"],"email":"bbach@inf.ed.ac.uk","is_corresponding":false,"name":"Benjamin Bach"},{"affiliations":["University of Edinburgh, Edinburgh, United Kingdom"],"email":"uhinrich@ed.ac.uk","is_corresponding":false,"name":"Uta Hinrichs"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Jinrui Wang"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1446","time_end":"","time_stamp":"","time_start":"","title":"Visualization Atlases: Explaining and Exploring Complex Topics through Data, Visualization, and Narration","uid":"v-full-1446","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1451":{"abstract":"We present a systematic review, an empirical study, and a first set of considerations for designing visualizations in motion, derived from a concrete scenario in which these visualizations were used to support a primary task. In practice, when viewers are confronted with embedded visualizations, they often have to focus on a primary task and can only quickly glance at a visualization showing rich, often dynamically updated, information. As such, the visualizations must be designed so as not to distract from the primary task, while at the same time being readable and useful for aiding the primary task. For example, in games, players who are engaged in a battle have to look at their enemies but also read the remaining health of their own game character from the health bar over their character's head. Many trade-offs are possible in the design of embedded visualizations in such dynamic scenarios, which we explore in-depth in this paper with a focus on user experience. We use video games as an example of an application context with a rich existing set of visualizations in motion. We begin our work with a systematic review of in-game visualizations in motion. Next, we conduct an empirical user study to investigate how different designs of embedded visualizations in motion impact user experience. We conclude with a set of considerations and trade-offs for designing visualizations in motion more broadly, derived from what we learned about video games. 
All supplemental materials of this paper are available at osf.io/3v8wm/.","accessible_pdf":false,"authors":[{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Orsay, France","Inria, Saclay, France"],"email":"yaolijie0219@gmail.com","is_corresponding":true,"name":"Lijie Yao"},{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Orsay, France","Inria, Saclay, France"],"email":"federicabucchieri@gmail.com","is_corresponding":false,"name":"Federica Bucchieri"},{"affiliations":["Carleton University, Ottawa, Canada"],"email":"dieselfish@gmail.com","is_corresponding":false,"name":"Victoria McArthur"},{"affiliations":["LISN, Universit\u00e9 Paris-Saclay, CNRS, INRIA, Orsay, France"],"email":"anastasia.bezerianos@universite-paris-saclay.fr","is_corresponding":false,"name":"Anastasia Bezerianos"},{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Orsay, France","Inria, Saclay, France"],"email":"petra.isenberg@inria.fr","is_corresponding":false,"name":"Petra Isenberg"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Lijie Yao"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1451","time_end":"","time_stamp":"","time_start":"","title":"User Experience of Visualizations in Motion: A Case Study and Design Considerations","uid":"v-full-1451","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1461":{"abstract":"This paper presents a practical approach for the optimization of topological simplification, a central pre-processing step for the analysis and visualization of scalar data. Given an input scalar field f and a set of \u201csignal\u201d persistence pairs to maintain, our approach produces an output field g that is close to f and which optimizes (i) the cancellation of \u201cnon-signal\u201d pairs, while (ii) preserving the \u201csignal\u201d pairs. In contrast to pre-existing simplification approaches, our method is not restricted to persistence pairs involving extrema and can thus address a larger class of topological features, in particular saddle pairs in three-dimensional scalar data. Our approach leverages recent generic persistence optimization frameworks and extends them with tailored accelerations specific to the problem of topological simplification. Extensive experiments report substantial accelerations over these frameworks, thereby making topological simplification optimization practical for real-life datasets. Our work enables a direct visualization and analysis of the topologically simplified data, e.g., via isosurfaces of simplified topology (fewer components and handles). We apply our approach to the extraction of prominent filament structures in three-dimensional data. Specifically, we show that our pre-simplification of the data leads to practical improvements over standard topological techniques for removing filament loops. We also show how our framework can be used to repair genus defects in surface processing. 
Finally, we provide a C++ implementation for reproducibility purposes.","accessible_pdf":false,"authors":[{"affiliations":["CNRS, Paris, France","SORBONNE UNIVERSITE, Paris, France"],"email":"mohamed.kissi@lip6.fr","is_corresponding":true,"name":"Mohamed KISSI"},{"affiliations":["CNRS, Paris, France","Sorbonne Universit\u00e9, Paris, France"],"email":"mathieu.pont@lip6.fr","is_corresponding":false,"name":"Mathieu Pont"},{"affiliations":["University of Arizona, Tucson, United States"],"email":"josh@cs.arizona.edu","is_corresponding":false,"name":"Joshua A Levine"},{"affiliations":["CNRS, Paris, France","Sorbonne Universit\u00e9, Paris, France"],"email":"julien.tierny@sorbonne-universite.fr","is_corresponding":false,"name":"Julien Tierny"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Mohamed KISSI"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1461","time_end":"","time_stamp":"","time_start":"","title":"A Practical Solver for Scalar Data Topological Simplification","uid":"v-full-1461","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1472":{"abstract":"Trained on vast corpora, Large Language Models (LLMs) have the potential to encode visualization design knowledge and best practices. However, if they fail to do so, they might provide unreliable visualization recommendations. What visualization design preferences, then, have LLMs learned? We contribute DracoGPT, an approach for extracting and modeling visualization design preferences from LLMs. To assess varied tasks, we develop two pipelines---DracoGPT-Rank and DracoGPT-Recommend---to model LLMs prompted to either rank or recommend visual encoding specifications. We use Draco as a shared knowledge base in which to represent LLM design preferences and compare them to best practices from empirical research. We demonstrate that DracoGPT models the preferences expressed by LLMs well, enabling analysis in terms of Draco design constraints. Across a suite of backing LLMs, we find that DracoGPT-Rank and DracoGPT-Recommend moderately agree with each other, but both substantively diverge from guidelines drawn from human subjects experiments. Future work can build on our approach to expand Draco's knowledge base to model a richer set of preferences and serve as a reliable and cost-effective stand-in for LLMs.","accessible_pdf":false,"authors":[{"affiliations":["University of Washington, Seattle, United States"],"email":"wwill@cs.washington.edu","is_corresponding":true,"name":"Huichen Will Wang"},{"affiliations":["University of Washington, Seattle, United States"],"email":"mgord@cs.stanford.edu","is_corresponding":false,"name":"Mitchell L. 
Gordon"},{"affiliations":["University of Washington, Seattle, United States"],"email":"leibatt@cs.washington.edu","is_corresponding":false,"name":"Leilani Battle"},{"affiliations":["University of Washington, Seattle, United States"],"email":"jheer@uw.edu","is_corresponding":false,"name":"Jeffrey Heer"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Huichen Will Wang"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1472","time_end":"","time_stamp":"","time_start":"","title":"DracoGPT: Extracting Visualization Design Preferences from Large Language Models","uid":"v-full-1472","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1474":{"abstract":"Recent advancements in Large Language Models (LLMs) and Prompt Engineering have made chatbot customization more accessible, significantly reducing barriers to tasks that previously required programming skills. However, prompt evaluation, especially at the dataset scale, remains complex due to the need to assess prompts across thousands of test instances within a dataset. Our study, based on a comprehensive literature review and pilot study, summarized five critical challenges in prompt evaluation. In response, we introduce a feature-oriented workflow for systematic prompt evaluation, focusing on text summarization. Our workflow advocates feature metrics such as complexity, formality, or naturalness, instead of using traditional quality metrics like ROUGE. This design choice enables a more user-friendly evaluation of prompts, as it guides users in sorting through the ambiguity inherent in natural language. To support this workflow, we introduce Awesum, a visual analytics system that facilitates identifying optimal prompt refinements through interactive visualizations, featuring a novel Prompt Comparator design that employs a BubbleSet-inspired design enhanced by dimensionality reduction techniques. We evaluate the effectiveness and general applicability of the system with practitioners from various domains and found that (1) our design helps overcome the learning curve for non-technical people to conduct a systematic evaluation, and (2) our feature-oriented workflow has the potential to generalize to other NLG and image-generation tasks. 
For future work, we advocate moving toward feature-oriented evaluation of LLM prompts and discuss open challenges in human-agent interaction.","accessible_pdf":false,"authors":[{"affiliations":["University of California Davis, Davis, United States"],"email":"ytlee@ucdavis.edu","is_corresponding":true,"name":"Sam Yu-Te Lee"},{"affiliations":["University of California, Davis, Davis, United States"],"email":"abahukhandi@ucdavis.edu","is_corresponding":false,"name":"Aryaman Bahukhandi"},{"affiliations":["University of California at Davis, Davis, United States"],"email":"dyuliu@ucdavis.edu","is_corresponding":false,"name":"Dongyu Liu"},{"affiliations":["University of California at Davis, Davis, United States"],"email":"ma@cs.ucdavis.edu","is_corresponding":false,"name":"Kwan-Liu Ma"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Sam Yu-Te Lee"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1474","time_end":"","time_stamp":"","time_start":"","title":"Towards Dataset-scale and Feature-oriented Evaluation of Text Summarization in Large Language Model Prompts","uid":"v-full-1474","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1480":{"abstract":"We propose the notion of Attention-aware Visualizations (AAVs) that track the user's perception of a visual representation over time and feed this information back to the visualization. This idea is particularly useful for ubiquitous and immersive analytics where knowing which embedded visualizations the user is looking at can be used to make visualizations react appropriately to the user's attention: for example, by highlighting data the user has not yet seen. We can separate the approach into three components: (1) measuring the user's gaze on a visualization and its parts; (2) tracking the user's attention over time; and (3) reactively modifying the visual representation based on the current attention metric. In this paper, we present two separate implementations of AAV: a 2D numeric integration of attention for web-based visualizations that can use an embodied eye-tracker to capture the user's gaze, and a 3D implementation that uses the stencil buffer to track the visibility of each individual mark in a visualization. Both methods provide similar mechanisms for accumulating attention over time and changing the appearance of marks in response. We also present results from a controlled laboratory experiment studying different visual feedback mechanisms for attention.","accessible_pdf":false,"authors":[{"affiliations":["Aarhus University, Aarhus, Denmark"],"email":"arvind@cs.au.dk","is_corresponding":true,"name":"Arvind Srinivasan"},{"affiliations":["Aarhus University, Aarhus N, Denmark"],"email":"johannes@ellemose.eu","is_corresponding":false,"name":"Johannes Ellemose"},{"affiliations":["Bangor University, Bangor, United Kingdom"],"email":"p.butcher@bangor.ac.uk","is_corresponding":false,"name":"Peter W. S. Butcher"},{"affiliations":["Bangor University, Bangor, United Kingdom"],"email":"p.ritsos@bangor.ac.uk","is_corresponding":false,"name":"Panagiotis D. 
Ritsos"},{"affiliations":["Aarhus University, Aarhus, Denmark"],"email":"elm@cs.au.dk","is_corresponding":false,"name":"Niklas Elmqvist"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Arvind Srinivasan"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1480","time_end":"","time_stamp":"","time_start":"","title":"Attention-Aware Visualization: Tracking and Responding to User Perception Over Time","uid":"v-full-1480","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1483":{"abstract":"Egocentric networks, often visualized as node-link diagrams, portray the complex relationship (link) dynamics between an entity (node) and others. However, common analytics tasks are multifaceted, encompassing interactions among four key aspects: strength, function, structure, and content. Current node-link visualization designs may fall short, focusing narrowly on certain aspects and neglecting the holistic, dynamic nature of egocentric networks. To bridge this gap, we introduce SpreadLine, a novel visualization framework designed to enable the visual exploration of egocentric networks from these four aspects at the microscopic level. Leveraging the intuitive appeal of storyline visualizations, SpreadLine adopts a storyline-based design to represent entities and their evolving relationships. We further encode essential topological information in the layout and condense the contextual information in a metro map metaphor, allowing for a more engaging and effective way to explore temporal and attribute-based information. To guide our work, with a thorough review of pertinent literature, we have distilled a task taxonomy that addresses the analytical needs specific to egocentric network exploration. Acknowledging the diverse analytical requirements of users, SpreadLine offers customizable encodings to enable users to tailor the framework for their tasks. 
We demonstrate the efficacy and general applicability of SpreadLine through three diverse real-world case studies and a usability study.","accessible_pdf":false,"authors":[{"affiliations":["University of California, Davis, Davis, United States"],"email":"yskuo@ucdavis.edu","is_corresponding":true,"name":"Yun-Hsin Kuo"},{"affiliations":["University of California at Davis, Davis, United States"],"email":"dyuliu@ucdavis.edu","is_corresponding":false,"name":"Dongyu Liu"},{"affiliations":["University of California at Davis, Davis, United States"],"email":"ma@cs.ucdavis.edu","is_corresponding":false,"name":"Kwan-Liu Ma"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yun-Hsin Kuo"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1483","time_end":"","time_stamp":"","time_start":"","title":"SpreadLine: Visualizing Egocentric Dynamic Influence","uid":"v-full-1483","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1487":{"abstract":"Referential gestures, or as termed in linguistics, deixis, are an essential part of communication around data visualizations. Despite their importance, such gestures are often overlooked when documenting data analysis meetings. Transcripts, for instance, fail to capture gestures, and video recordings may not adequately capture or emphasize them. We introduce a novel method for documenting collaborative data meetings that treats deixis as a first-class citizen. Our proposed framework captures cursor-based gestural data along with audio and converts them into interactive documents. The framework leverages a large language model to identify word correspondences with gestures. These identified references are used to create context-based annotations in the resulting interactive document. We assess the effectiveness of our proposed method through a user study, finding that participants preferred our automated interactive documentation over recordings, transcripts, and manual note-taking. Furthermore, we derive a preliminary taxonomy of cursor-based deictic gestures from participant actions during the study. This taxonomy offers further opportunities for better utilizing cursor-based deixis in collaborative data analysis scenarios.","accessible_pdf":false,"authors":[{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"hatch.on27@gmail.com","is_corresponding":true,"name":"Chang Han"},{"affiliations":["The University of Utah, Salt Lake City, United States"],"email":"kisaacs@sci.utah.edu","is_corresponding":false,"name":"Katherine E. 
Isaacs"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Chang Han"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1487","time_end":"","time_stamp":"","time_start":"","title":"A Deixis-Centered Approach for Documenting Remote Synchronous Communication around Data Visualizations","uid":"v-full-1487","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1488":{"abstract":"A year ago, we submitted an IEEE VIS paper entitled \u201cSwaying the Public? Impacts of Election Forecast Visualizations on Emotion, Trust, and Intention in the 2022 U.S. Midterms\u201d [68], which was later bestowed with the honor of a best paper award. Yet, studying such a complex phenomenon required us to explore many more design paths than we could count, and certainly more than we could document in a single paper. This paper, then, is the unwritten prequel\u2014the backstory. It chronicles our journey from a simple idea\u2014to study visualizations for election forecasts\u2014through obstacles such as developing meaningfully different, easy-to-understand forecast visualizations, crafting professional-looking forecasts, and grappling with how to study perceptions of the forecasts before, during, and after the 2022 U.S. midterm elections. Our backstory began with developing a design space for two-party election forecasts, de\ufb01ning dimensions such as data transformations, visual channels, layouts, and types of animated narratives. We then qualitatively evaluated ten representative prototypes in this design space through interviews with 13 participants. The interviews yielded invaluable insights into how people interpret uncertainty visualizations and reason about probability in a U.S. election context, such as confounding win probability with vote share and erroneously forming connections between concrete visual representations (like dots) and real-world entities (like votes). Informed by these insights, we revised our prototypes to address ambiguity in interpreting visual encodings, particularly through the inclusion of extensive annotations. As we navigated these design paths, we contributed a design space and insights that may help others when designing uncertainty visualizations. 
We also hope that our design lessons and research process can inspire the research community when exploring topics related to designing visualizations for the general public.","accessible_pdf":false,"authors":[{"affiliations":["Northwestern University, Evanston, United States"],"email":"fumeng.p.yang@gmail.com","is_corresponding":true,"name":"Fumeng Yang"},{"affiliations":["Northwestern University, Evanston, United States","Northwestern University, Evanston, United States"],"email":"mandicai2028@u.northwestern.edu","is_corresponding":false,"name":"Mandi Cai"},{"affiliations":["Northwestern University, Evanston, United States"],"email":"chloemortenson2026@u.northwestern.edu","is_corresponding":false,"name":"Chloe Rose Mortenson"},{"affiliations":["Northwestern University, Evanston, United States"],"email":"hoda@u.northwestern.edu","is_corresponding":false,"name":"Hoda Fakhari"},{"affiliations":["Northwestern University, Evanston, United States"],"email":"aysedlokmanoglu@gmail.com","is_corresponding":false,"name":"Ayse Deniz Lokmanoglu"},{"affiliations":["Northwestern University, Evanston, United States"],"email":"nicholas.diakopoulos@gmail.com","is_corresponding":false,"name":"Nicholas Diakopoulos"},{"affiliations":["Northwestern University, Evanston, United States"],"email":"erik.nisbet@northwestern.edu","is_corresponding":false,"name":"Erik Nisbet"},{"affiliations":["Northwestern University, Chicago, United States"],"email":"matthew.kay@gmail.com","is_corresponding":false,"name":"Matthew Kay"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Fumeng Yang"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1488","time_end":"","time_stamp":"","time_start":"","title":"The Backstory to \u201cSwaying the Public\u201d: A Design Chronicle of Election Forecast Visualizations","uid":"v-full-1488","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1489":{"abstract":"Projecting high-dimensional vectors into two dimensions for visualization, known as embedding visualization, facilitates perceptual reasoning and interpretation. Comparison of multiple embedding visualizations drives decision-making in many domains, but conventional comparison methods are limited by a reliance on direct point correspondences. This requirement precludes embedding comparisons without point correspondences, such as two different datasets of annotated images, and fails to capture meaningful higher-level relationships among point groups. To address these shortcomings, we propose a general framework to compare embedding visualizations based on shared class labels rather than individual points. Our approach partitions points into regions corresponding to three key class concepts---confusion, neighborhood, and relative size---to characterize intra- and inter-class relationships. Informed by a preliminary user study, we realize an implementation of our framework using perceptual neighborhood graphs to define these regions and introduce metrics to quantify each concept. We demonstrate the generality of our framework with use cases from machine learning and single-cell biology, highlighting our metrics' ability to surface insightful comparisons across label hierarchies. 
To assess the effectiveness of our approach, we conducted a user study with five machine learning researchers and six single-cell biologists using an interactive and scalable prototype developed in Python and Rust. Our metrics enabled more structured comparison through visual guidance and increased participants\u2019 confidence in their findings.","accessible_pdf":false,"authors":[{"affiliations":["Harvard Medical School, Boston, United States"],"email":"trevor_manz@g.harvard.edu","is_corresponding":true,"name":"Trevor Manz"},{"affiliations":["Ozette Technologies, Seattle, United States"],"email":"f.lekschas@gmail.com","is_corresponding":false,"name":"Fritz Lekschas"},{"affiliations":["Ozette Technologies, Seattle, United States"],"email":"palmergreene@gmail.com","is_corresponding":false,"name":"Evan Greene"},{"affiliations":["Ozette Technologies, Seattle, United States"],"email":"greg@ozette.com","is_corresponding":false,"name":"Greg Finak"},{"affiliations":["Harvard Medical School, Boston, United States"],"email":"nils@hms.harvard.edu","is_corresponding":false,"name":"Nils Gehlenborg"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Trevor Manz"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1489","time_end":"","time_stamp":"","time_start":"","title":"A General Framework for Comparing Embedding Visualizations Across Class-Label Hierarchies","uid":"v-full-1489","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1494":{"abstract":"Topological abstractions offer a method to summarize the behavior of vector fields, but computing them robustly can be challenging due to numerical precision issues. One alternative is to represent the vector field using a discrete approach, which constructs a collection of pairs of simplices in the input mesh that satisfies criteria introduced by Forman\u2019s discrete Morse theory. While numerous approaches exist to compute pairs in the restricted case of the gradient of a scalar field, state-of-the-art algorithms for the general case of vector fields require expensive optimization procedures. This paper introduces a fast, novel approach for pairing simplices of two-dimensional, triangulated vector fields that do not vary in time. The key insight of our approach is that we can employ a local evaluation, inspired by the approach used to construct a discrete gradient field, where every cell in a mesh is considered by no more than one of its vertices. Specifically, we observe that for any edge in the input mesh, we can uniquely assign an outward direction of flow. We can further expand this consistent notion of outward flow at each vertex, which corresponds to the concept of a downhill flow in the case of scalar fields. Working with outward flow enables a linear-time algorithm that processes the (outward) neighborhoods of each vertex one-by-one, similar to the approach used for scalar fields. We couple our approach to constructing discrete vector fields with a method to extract, simplify, and visualize topological features. 
Empirical results on analytic and simulation data demonstrate drastic improvements in running time, produce features similar to the current state-of-the-art, and show the application of simplification to large, complex flows.","accessible_pdf":false,"authors":[{"affiliations":["University of Arizona, Tucson, United States"],"email":"finkent@arizona.edu","is_corresponding":true,"name":"Tanner Finken"},{"affiliations":["Sorbonne Universit\u00e9, Paris, France"],"email":"julien.tierny@sorbonne-universite.fr","is_corresponding":false,"name":"Julien Tierny"},{"affiliations":["University of Arizona, Tucson, United States"],"email":"josh@cs.arizona.edu","is_corresponding":false,"name":"Joshua A Levine"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Tanner Finken"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1494","time_end":"","time_stamp":"","time_start":"","title":"Localized Evaluation for Constructing Discrete Vector Fields","uid":"v-full-1494","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1500":{"abstract":"Haptic feedback provides a sensory stimulus crucial for interacting with and analyzing three-dimensional spatio-temporal phenomena on surface visualizations. Given its ability to provide enhanced spatial perception and scene maneuverability, virtual reality (VR) catalyzes haptic interactions on surface visualizations. Various interaction modes, encompassing both mid-air and on-surface interactions---with or without the application of assisting force stimuli---have been explored using haptic force feedback devices. In this paper, we evaluate the use of on-surface and assisted on-surface haptic modes of interaction compared to a no-haptic interaction mode. A force-based haptic stylus is used for all three modalities; the on-surface mode uses collision-based forces, whereas the assisted on-surface mode is accompanied by an additional snapping force. We conducted a within-subjects user study involving fundamental interaction tasks performed on surface visualizations. Keeping a consistent visual design across all three modes, our study incorporates tasks that require the localization of the highest, lowest, and random points on surfaces; and tasks that focus on brushing curves on surfaces with varying complexity and occlusion levels. Our findings show that participants took almost the same time to brush curves using all the interaction modes. They could draw smoother curves using the on-surface interaction modes compared to the no-haptic mode. However, the assisted on-surface mode provided better accuracy than the on-surface mode. The on-surface mode was slower in point localization, but the accuracy depended on the visual cues and occlusions associated with the tasks. 
Finally, we discuss participant feedback on using haptic force feedback as a tangible input modality and share takeaways to aid the design of haptics-based tangible interactions for surface visualizations.","accessible_pdf":false,"authors":[{"affiliations":["University of Calgary, Calgary, Canada"],"email":"hamza.afzaal@ucalgary.ca","is_corresponding":true,"name":"Hamza Afzaal"},{"affiliations":["University of Calgary, Calgary, Canada"],"email":"ualim@ucalgary.ca","is_corresponding":false,"name":"Usman Alim"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Hamza Afzaal"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1500","time_end":"","time_stamp":"","time_start":"","title":"Evaluating Force-based Haptics for Immersive Tangible Interactions with Surface Visualizations","uid":"v-full-1500","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1502":{"abstract":"Visualization is widely used for exploring personal data, but many visualization authoring systems do not support expressing data in flexible, personal, and organic layouts. Sketching is an accessible tool for experimenting with visualization designs, but formalizing sketched elements into structured data representations is difficult, as modifying hand-drawn glyphs to encode data when available is labour-intensive and error prone. We propose an approach where authors structure their own expressive templates, capturing implicit style as well as explicit data mappings, through sketching a representative visualization for an envisioned or partial dataset. Our approach seeks to support freeform exploration and partial specification, balanced against interactive machine support for specifying the generative procedural rules. We implement this approach in DataGarden, a system designed to support hierarchical data visualizations, and evaluate it with 12 participants in a reproduction study and four experts in a freeform creative task. Participants readily picked up the core idea of template authoring, and the variety of workflows we observed highlight how this process serves design and data ideation as well as visual constraint iteration. 
We discuss challenges in implementing the design considerations underpinning DataGarden, and illustrate its potential in a gallery of visualizations generated from authored templates.","accessible_pdf":false,"authors":[{"affiliations":["Universit\u00e9 Paris-Saclay, Orsay, France"],"email":"anna.offenwanger@gmail.com","is_corresponding":true,"name":"Anna Offenwanger"},{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Inria, LISN, Orsay, France"],"email":"theophanis.tsandilas@inria.fr","is_corresponding":false,"name":"Theophanis Tsandilas"},{"affiliations":["University of Toronto, Toronto, Canada"],"email":"fanny@dgp.toronto.edu","is_corresponding":false,"name":"Fanny Chevalier"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Anna Offenwanger"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1502","time_end":"","time_stamp":"","time_start":"","title":"DataGarden: Formalizing Personal Sketches into Structured Visualization Templates","uid":"v-full-1502","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1503":{"abstract":"The increasing reliance on Large Language Models (LLMs) for health information seeking can pose severe risks due to the potential for misinformation and the complexity of these topics. This paper introduces KnowNet, a visualization system that integrates LLMs with Knowledge Graphs (KG) to provide enhanced accuracy and structured exploration. One core idea in KnowNet is to conceptualize the understanding of a subject as the gradual construction of a graph visualization, aligning the user's cognitive process with both the structured data in KGs and the unstructured outputs from LLMs. Specifically, we extracted triples (e.g., entities and their relations) from LLM outputs and mapped them to validated information and supporting evidence in external KGs. Based on the neighborhood of the currently explored entities in KGs, KnowNet provides recommendations for further inquiry, aiming to guide a comprehensive understanding without overlooking critical aspects. A progressive graph visualization is proposed to show the alignment between LLMs and KGs, track previous inquiries, and connect this history with current queries and next-step recommendations. 
We demonstrate the effectiveness of our system via use cases and expert interviews.","accessible_pdf":false,"authors":[{"affiliations":["University of Minnesota, Minneapolis, United States"],"email":"yan00111@umn.edu","is_corresponding":false,"name":"Youfu Yan"},{"affiliations":["University of Minnesota, Minneapolis, United States"],"email":"hou00127@umn.edu","is_corresponding":false,"name":"Yu Hou"},{"affiliations":["University of Minnesota, Minneapolis, United States"],"email":"xiao0290@umn.edu","is_corresponding":false,"name":"Yongkang Xiao"},{"affiliations":["University of Minnesota, Minneapolis, United States"],"email":"zhan1386@umn.edu","is_corresponding":false,"name":"Rui Zhang"},{"affiliations":["University of Minnesota, Minneapolis , United States"],"email":"qianwen@umn.edu","is_corresponding":true,"name":"Qianwen Wang"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Qianwen Wang"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1503","time_end":"","time_stamp":"","time_start":"","title":"Guided Health-related Information Seeking from LLMs via Knowledge Graph Integration","uid":"v-full-1503","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1504":{"abstract":"A wide range of visualization authoring interfaces enable the creation of highly customized visualizations. However, prioritizing expressiveness often impedes the learnability of the authoring interface. The diversity of users, such as varying computational skills and prior experiences in user interfaces, makes it even more challenging for a single authoring interface to satisfy the needs of a broad audience. In this paper, we introduce a framework to balance learnability and expressivity in a visualization authoring system. Adopting insights from learnability studies, such as multimodal interaction and visualization literacy, we explore the design space of blending multiple visualization authoring interfaces for supporting authoring tasks in a complementary and flexible manner. To evaluate the effectiveness of blending interfaces, we implemented a proof-of-concept system, Blace, that combines four common visualization authoring interfaces\u2014template-based, shelf configuration, natural language, and code editor\u2014that are tightly linked to one another to help users easily relate unfamiliar interfaces to more familiar ones. Using the system, we conducted a user study with 12 domain experts who regularly visualize genomics data as part of their analysis workflow. Participants with varied visualization and programming backgrounds were able to successfully reproduce complex visualization examples without a guided tutorial in the study. Feedback from a post-study qualitative questionnaire further suggests that blending interfaces enabled participants to learn the system easily and assisted them in confidently editing unfamiliar visualization grammar in the code editor, enabling expressive customization. 
Reflecting on our study results and the design of our system, we discuss the different interaction patterns that we identified and design implications for blending visualization authoring interfaces.","accessible_pdf":false,"authors":[{"affiliations":["Harvard Medical School, Boston, United States"],"email":"sehi_lyi@hms.harvard.edu","is_corresponding":true,"name":"Sehi L'Yi"},{"affiliations":["Eindhoven University of Technology, Eindhoven, Netherlands"],"email":"a.v.d.brandt@tue.nl","is_corresponding":false,"name":"Astrid van den Brandt"},{"affiliations":["Harvard Medical School, Boston, United States"],"email":"etowah_adams@hms.harvard.edu","is_corresponding":false,"name":"Etowah Adams"},{"affiliations":["Harvard Medical School, Boston, United States"],"email":"huyen_nguyen@hms.harvard.edu","is_corresponding":false,"name":"Huyen N. Nguyen"},{"affiliations":["Harvard Medical School, Boston, United States"],"email":"nils@hms.harvard.edu","is_corresponding":false,"name":"Nils Gehlenborg"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Sehi L'Yi"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1504","time_end":"","time_stamp":"","time_start":"","title":"Learnable and Expressive Visualization Authoring Through Blended Interfaces","uid":"v-full-1504","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1522":{"abstract":"Despite the recent surge of research efforts to make data visualizations accessible to people who are blind or have low-vision (BLV), how to support BLV people's data analysis remains an important and challenging question. As refreshable tactile displays (RTDs) become cheaper and conversational agents continue to improve, their combination provides a promising approach to support BLV people's interactive data analysis. To understand how BLV people would use and react to a system combining an RTD with a conversational agent, we conducted a Wizard-of-Oz study with 11 BLV participants involving line graphs, bar charts, and isarithmic maps. From an analysis of participant interactions, we identified nine distinct patterns and learned that the choice of modalities depended on the type of task and prior experience with tactile graphics. We also found that participants strongly preferred the combination of RTD and speech to a single modality, and that participants with more tactile experience described how tactile images facilitated deeper engagement with the data and supported independent interpretation. 
Our findings will inform the design of interfaces for such interactive mixed-modality systems.","accessible_pdf":false,"authors":[{"affiliations":["Monash University, Melbourne, Australia"],"email":"samuel.reinders@monash.edu","is_corresponding":true,"name":"Samuel Reinders"},{"affiliations":["Monash University, Melbourne, Australia"],"email":"matthew.butler@monash.edu","is_corresponding":false,"name":"Matthew Butler"},{"affiliations":["Monash University, Clayton, Australia"],"email":"ingrid.zukerman@monash.edu","is_corresponding":false,"name":"Ingrid Zukerman"},{"affiliations":["Yonsei University, Seoul, Korea, Republic of","Microsoft Research, Redmond, United States"],"email":"b.lee@yonsei.ac.kr","is_corresponding":false,"name":"Bongshin Lee"},{"affiliations":["Monash University, Melbourne, Australia"],"email":"lizhen.qu@monash.edu","is_corresponding":false,"name":"Lizhen Qu"},{"affiliations":["Monash University, Melbourne, Australia"],"email":"kim.marriott@monash.edu","is_corresponding":false,"name":"Kim Marriott"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Samuel Reinders"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1522","time_end":"","time_stamp":"","time_start":"","title":"When Refreshable Tactile Displays Meet Conversational Agents: Investigating Accessible Data Presentation and Analysis with Touch and Speech","uid":"v-full-1522","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1533":{"abstract":"We introduce DiffFit, a differentiable algorithm for fitting atomistic protein structures into experimentally reconstructed cryo-electron microscopy (cryo-EM) volume maps. This process is essential in structural biology to semi-automatically reconstruct large meso-scale models of complex protein assemblies and complete cellular structures that are based on measured cryo-EM data. Current approaches require manual fitting in 3D to produce approximately aligned structures, followed by automated fine-tuning of the alignment. With our DiffFit approach, we enable domain scientists to automatically fit new structures and visualize the fitting results for inspection and interactive revision. Our fitting begins with differentiable 3D rigid transformations of the protein atom coordinates, followed by sampling the density values at its atom coordinates from the target cryo-EM volume. To ensure a meaningful correlation between the sampled densities and the protein structure, we propose a novel loss function based on a multi-resolution volume-array approach and the exploitation of the negative space. Such a loss function serves as a critical metric for assessing the fitting quality, ensuring both fitting accuracy and improved visualization of the results. We assessed the placement quality of DiffFit with several large, realistic datasets and found its quality to be superior to that of previous methods. We further evaluated our method in two use cases. First, we demonstrate its use in the process of automating the integration of known composite structures into larger protein complexes. Second, we show that it facilitates the fitting of predicted protein domains into volume densities to aid researchers in the identification of unknown proteins. 
We implemented our algorithm as an open-source plugin (github.com/nanovis/DiffFitViewer) in ChimeraX, a leading visualization software package in the field. All supplemental materials are available at osf.io/5tx4q.","accessible_pdf":false,"authors":[{"affiliations":["King Abdullah University of Science and Technology, Thuwal, Saudi Arabia"],"email":"deng.luo@kaust.edu.sa","is_corresponding":true,"name":"Deng Luo"},{"affiliations":["King Abdullah University of Science and Technology, Thuwal, Saudi Arabia"],"email":"zainab.alsuwaykit@kaust.edu.sa","is_corresponding":false,"name":"Zainab Alsuwaykit"},{"affiliations":["King Abdullah University of Science and Technology, Thuwal, Saudi Arabia"],"email":"dawar.khan@kaust.edu.sa","is_corresponding":false,"name":"Dawar Khan"},{"affiliations":["King Abdullah University of Science and Technology, Thuwal, Saudi Arabia"],"email":"ondrej.strnad@kaust.edu.sa","is_corresponding":false,"name":"Ond\u0159ej Strnad"},{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Orsay, France","Inria, Saclay, France"],"email":"tobias.isenberg@gmail.com","is_corresponding":false,"name":"Tobias Isenberg"},{"affiliations":["King Abdullah University of Science and Technology, Thuwal, Saudi Arabia"],"email":"ivan.viola@kaust.edu.sa","is_corresponding":false,"name":"Ivan Viola"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Deng Luo"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1533","time_end":"","time_stamp":"","time_start":"","title":"DiffFit: Visually-Guided Differentiable Fitting of Molecule Structures to a Cryo-EM Map","uid":"v-full-1533","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1544":{"abstract":"Large Language Models (LLMs) have been successfully adopted for a variety of visualization tasks, but how far are we from perceptually aware LLMs that can predict human takeaways from visualizations? Graphical perception literature has shown that human chart takeaways are sensitive to visualization design choices, such as the spatial arrangement. In this work, we examine how well LLMs can predict such design choice sensitivity when generating takeaways, using bar charts with varying spatial layouts as a case study. We test four common chart arrangements: vertically juxtaposed, horizontally juxtaposed, overlaid, and stacked, through three experimental phases. In Phase 1, we identified the optimal configuration of LLMs to generate meaningful chart takeaways, across four LLMs (GPT3.5, GPT4, GPT4V, and Gemini 1.0 Pro), two temperature settings (0, 0.7), four chart specifications (Vega-Lite, Matplotlib, ggplot2, and scene graphs), and several prompting strategies. We found that even state-of-the-art LLMs can struggle to generate factually accurate takeaways. In Phase 2, using the optimal LLM configuration, we generated 30 chart takeaways across the four arrangements of bar charts using two datasets, with both zero-shot and one-shot settings. Compared to data on human takeaways from prior work, we found that the takeaways LLMs generate often do not align with human comparisons. In Phase 3, we examined the effect of the charts\u2019 underlying data values on takeaway alignment between humans and LLMs, and found both matches and mismatches. 
Overall, our work evaluates the ability of LLMs to emulate human interpretations of data and points to challenges and opportunities in using LLMs to predict human-aligned chart takeaways.","accessible_pdf":false,"authors":[{"affiliations":["University of Washington, Seattle, United States"],"email":"wwill@cs.washington.edu","is_corresponding":true,"name":"Huichen Will Wang"},{"affiliations":["Adobe Research, Seattle, United States"],"email":"jhoffs@adobe.com","is_corresponding":false,"name":"Jane Hoffswell"},{"affiliations":["University of Massachusetts Amherst, Amherst, United States"],"email":"yukithane@gmail.com","is_corresponding":false,"name":"Sao Myat Thazin Thane"},{"affiliations":["Adobe Research, San Jose, United States"],"email":"victorbursztyn2022@u.northwestern.edu","is_corresponding":false,"name":"Victor S. Bursztyn"},{"affiliations":["Georgia Tech, Atlanta, United States"],"email":"cxiong@gatech.edu","is_corresponding":false,"name":"Cindy Xiong Bearfield"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Huichen Will Wang"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1544","time_end":"","time_stamp":"","time_start":"","title":"How Aligned are Human Chart Takeaways and LLM Predictions? A Case Study on Bar Charts with Varying Layouts","uid":"v-full-1544","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1547":{"abstract":"Visual validation of regression models in scatterplots is a common practice for assessing model quality, yet its efficacy remains unquantified. We conducted two empirical experiments to investigate individuals' ability to visually validate linear regression models (linear trends) and to examine the impact of common visualization designs on validation quality. The first experiment showed that the level of accuracy for visual estimation of slope (i.e., fitting a line to data) is higher than for visual validation of slope (i.e., accepting a shown line). Notably, we found bias toward slopes that are \u201ctoo steep\u201d in both cases. This led to the novel insight that participants naturally assessed regression with orthogonal distances between the points and the line (i.e., ODR regression) rather than the common vertical distances (OLS regression). In the second experiment, we investigated whether incorporating common designs for regression visualization (error lines, bounding boxes, and confidence intervals) would improve visual validation. Even though error lines reduced validation bias, results failed to show the desired improvements in accuracy for any design. 
Overall, our findings suggest caution in using visual model validation for linear trends in scatterplots.","accessible_pdf":false,"authors":[{"affiliations":["University of Cologne, Cologne, Germany"],"email":"braun@cs.uni-koeln.de","is_corresponding":true,"name":"Daniel Braun"},{"affiliations":["Tufts University, Medford, United States"],"email":"remco@cs.tufts.edu","is_corresponding":false,"name":"Remco Chang"},{"affiliations":["University of Wisconsin - Madison, Madison, United States"],"email":"gleicher@cs.wisc.edu","is_corresponding":false,"name":"Michael Gleicher"},{"affiliations":["University of Cologne, Cologne, Germany"],"email":"landesberger@cs.uni-koeln.de","is_corresponding":false,"name":"Tatiana von Landesberger"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Daniel Braun"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1547","time_end":"","time_stamp":"","time_start":"","title":"Beware of Validation by Eye: Visual Validation of Linear Trends in Scatterplots","uid":"v-full-1547","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1568":{"abstract":"Dimensionality reduction techniques are widely used for visualizing high-dimensional data. However, support for interpreting patterns in dimensionality reduction results in the context of the original data space is often insufficient. Consequently, users may struggle to extract insights from the projections. In this paper we introduce DimBridge, a visual analytics tool that allows users to interact with visual patterns in a projection and retrieve corresponding data patterns. DimBridge supports several interactions, allowing users to perform various analyses, from contrasting multiple clusters to explaining complex latent structures. Leveraging first-order predicate logic, DimBridge identifies subspaces in the original dimensions relevant to a queried pattern and provides an interface for users to visualize and interact with them. We demonstrate how DimBridge can help users overcome the challenges associated with interpreting visual patterns in projections.","accessible_pdf":false,"authors":[{"affiliations":["Tufts University, Medford, United States"],"email":"brianmontambault@gmail.com","is_corresponding":true,"name":"Brian Montambault"},{"affiliations":["Tufts University, Medford, United States"],"email":"gabriel.appleby@tufts.edu","is_corresponding":false,"name":"Gabriel Appleby"},{"affiliations":["Tufts University, Boston, United States"],"email":"jen@cs.tufts.edu","is_corresponding":false,"name":"Jen Rogers"},{"affiliations":["Tufts University, Medford, United States"],"email":"camelia_daniela.brumar@tufts.edu","is_corresponding":false,"name":"Camelia D. 
Brumar"},{"affiliations":["Vanderbilt University, Nashville, United States"],"email":"mingwei.li@tufts.edu","is_corresponding":false,"name":"Mingwei Li"},{"affiliations":["Tufts University, Medford, United States"],"email":"remco@cs.tufts.edu","is_corresponding":false,"name":"Remco Chang"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Brian Montambault"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1568","time_end":"","time_stamp":"","time_start":"","title":"DimBridge: Interactive Explanation of Visual Patterns in Dimensionality Reductions with Predicate Logic","uid":"v-full-1568","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1571":{"abstract":"Effective security patrol management is critical for ensuring safety in diverse environments such as art galleries, airports, and factories. The behavior of patrols in these situations can be modeled by patrolling games. They simulate the behavior of the patrol and adversary in the building, which is modeled as a graph of interconnected nodes representing rooms. The designers of algorithms solving the game face the problem of analyzing complex graph layouts with temporal dependencies. Therefore, appropriate visual support is crucial for them to work effectively. In this paper, we present a novel tool that helps the designers of patrolling games explore the outcomes of the proposed algorithms and approaches, evaluate their success rate, and propose modifications that can improve their solutions. Our tool offers an intuitive and interactive interface, featuring a detailed exploration of patrol routes and probabilities of taking them, simulation of patrols, and other requested features. In close collaboration with experts in designing patrolling games, we conducted three case studies demonstrating the usage and usefulness of our tool. 
The prototype of the tool, along with exemplary datasets, is available at https://gitlab.fi.muni.cz/formela/strategy-vizualizer.","accessible_pdf":false,"authors":[{"affiliations":["Masaryk University, Brno, Czech Republic"],"email":"langm@mail.muni.cz","is_corresponding":true,"name":"Mat\u011bj Lang"},{"affiliations":["Masaryk University, Brno, Czech Republic"],"email":"469242@mail.muni.cz","is_corresponding":false,"name":"Adam \u0160t\u011bp\u00e1nek"},{"affiliations":["Faculty of Informatics, Masaryk University, Brno, Czech Republic"],"email":"514179@mail.muni.cz","is_corresponding":false,"name":"R\u00f3bert Zvara"},{"affiliations":["Faculty of Informatics, Masaryk University, Brno, Czech Republic"],"email":"rehak@fi.muni.cz","is_corresponding":false,"name":"Vojt\u011bch \u0158eh\u00e1k"},{"affiliations":["Masaryk University, Brno, Czech Republic"],"email":"kozlikova@fi.muni.cz","is_corresponding":false,"name":"Barbora Kozlikova"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Mat\u011bj Lang"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1571","time_end":"","time_stamp":"","time_start":"","title":"Who Let the Guards Out: Visual Support for Patrolling Games","uid":"v-full-1571","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1574":{"abstract":"The numerical extraction of vortex cores from time-dependent fluid flow has attracted much attention over the past decades. A commonly agreed upon vortex definition has remained elusive since a proper vortex core needs to satisfy two hard constraints: it must be objective and Lagrangian. Recent methods on objectivization met the first but not the second constraint, since there was no formal guarantee that the resulting vortex coreline is indeed a pathline of the fluid flow. In this paper, we propose the first vortex core definition that is both objective and Lagrangian. Our approach restricts observer motions to follow along pathlines, which reduces the degrees of freedom: we only need to optimize for an observer rotation that makes the observed flow as steady as possible. This optimization succeeds along Lagrangian vortex corelines and will result in a non-zero time-partial derivative everywhere else. By performing this optimization at each point of a spatial grid, we obtain a residual scalar field, which we call vortex deviation error. The local minima on the grid serve as seed points for a gradient descent optimization that delivers sub-voxel accurate corelines. The visualization of both 2D and 3D vortex cores is based on the separation of the movement of the vortex core and the swirling flow behavior around it. While the vortex core is represented by a pathline, the swirling motion around it is visualized by streamlines in the correct frame. 
We demonstrate the utility of the approach on several 2D and 3D time-dependent vector fields.","accessible_pdf":false,"authors":[{"affiliations":["Friedrich-Alexander-University Erlangen-N\u00fcrnberg, Erlangen, Germany"],"email":"tobias.guenther@fau.de","is_corresponding":true,"name":"Tobias G\u00fcnther"},{"affiliations":["University of Magdeburg, Magdeburg, Germany"],"email":"theisel@ovgu.de","is_corresponding":false,"name":"Holger Theisel"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Tobias G\u00fcnther"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1574","time_end":"","time_stamp":"","time_start":"","title":"Objective Lagrangian Vortex Cores and their Visual Representations","uid":"v-full-1574","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1594":{"abstract":"The visualization community has a rich history of reflecting upon visualization design flaws. Although research in this area has remained lively, we believe it is essential to continuously revisit this classic and critical topic in visualization research by incorporating more empirical evidence from diverse sources, characterizing new design flaws, building more systematic theoretical frameworks, and understanding the underlying reasons for these flaws. To address the above gaps, this work investigated visualization design flaws through the lens of the public, constructed a framework to summarize and categorize the identified flaws, and explored why these flaws occur. Specifically, we analyzed 2227 flawed data visualizations collected from an online gallery and derived a design task-associated taxonomy containing 76 specific design flaws. These flaws were further classified into three high-level categories (i.e., misinformation, uninformativeness, unsociability) and ten subcategories (e.g., inaccuracy, unfairness, ambiguity). Next, we organized five focus groups to explore why these design flaws occur and identified seven causes of the flaws. 
Finally, we proposed a research agenda for combating visualization design flaws and summarized nine research opportunities.","accessible_pdf":false,"authors":[{"affiliations":["Fudan University, Shanghai, China","Fudan University, Shanghai, China"],"email":"xingyulan96@gmail.com","is_corresponding":true,"name":"Xingyu Lan"},{"affiliations":["University of Edinburgh, Edinburgh, United Kingdom","University of Edinburgh, Edinburgh, United Kingdom"],"email":"coraline.liu.dataviz@gmail.com","is_corresponding":false,"name":"Yu Liu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Xingyu Lan"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1594","time_end":"","time_stamp":"","time_start":"","title":"I Came Across a Junk: Understanding Design Flaws of Data Visualization from the Public's Perspective","uid":"v-full-1594","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1595":{"abstract":"Assigning discriminable and harmonic colors to samples according to their class labels and spatial distribution can generate attractive visualizations and facilitate data exploration. However, as the number of classes increases, it is challenging to generate a high-quality color assignment result that accommodates all classes simultaneously. A practical solution is to organize classes into a hierarchy and then dynamically assign colors during exploration. However, existing color assignment methods fall short in generating high-quality color assignment results and dynamically aligning them with hierarchical structures. To address this issue, we develop a dynamic color assignment method for hierarchical data, which is formulated as a multi-objective optimization problem. This method simultaneously considers color discriminability, color harmony, and spatial distribution at each hierarchical level. By using the colors of parent classes to guide the color assignment of their child classes, our method further promotes both consistency and clarity across hierarchical levels. 
We demonstrate the effectiveness of our method in generating dynamic color assignment results with quantitative experiments and a user study.","accessible_pdf":false,"authors":[{"affiliations":["Tsinghua University, Beijing, China"],"email":"jiashu0717c@gmail.com","is_corresponding":true,"name":"Jiashu Chen"},{"affiliations":["Tsinghua University, Beijing, China"],"email":"vicayang496@gmail.com","is_corresponding":false,"name":"Weikai Yang"},{"affiliations":["Tsinghua University, Beijing, China"],"email":"jiazl22@mails.tsinghua.edu.cn","is_corresponding":false,"name":"Zelin Jia"},{"affiliations":["Tsinghua University, Beijing, China"],"email":"tarolancy@gmail.com","is_corresponding":false,"name":"Lanxi Xiao"},{"affiliations":["Tsinghua University, Beijing, China"],"email":"shixia@tsinghua.edu.cn","is_corresponding":false,"name":"Shixia Liu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Jiashu Chen"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1595","time_end":"","time_stamp":"","time_start":"","title":"Dynamic Color Assignment for Hierarchical Data","uid":"v-full-1595","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1597":{"abstract":"In understanding and redesigning the function of proteins in modern biochemistry, protein engineers are increasingly focusing on exploring regions in proteins called loops. Analyzing various characteristics of these regions helps the experts design the transfer of the desired function from one protein to another. This process is denoted as loop grafting. We designed a set of interactive visualizations that provide experts with visual support through all the loop grafting pipeline steps. The workflow is divided into several phases, reflecting the steps of the pipeline. Each phase is supported by a specific set of abstracted 2D visual representations of proteins and their loops that are interactively linked with the 3D View of proteins. By sequentially passing through the individual phases, the user shapes the list of loops that are potential candidates for loop grafting. Finally, the actual in-silico insertion of the loop candidates from one protein to the other is performed, and the results are visually presented to the user. In this way, the fully computational rational design of proteins and their loops results in newly designed protein structures that can be further assembled and tested through in-vitro experiments. We showcase the contribution of our visual support design in a real case scenario of changing the enantiomer selectivity of an engineered enzyme. Moreover, we provide readers with the experts' feedback.","accessible_pdf":false,"authors":[{"affiliations":["Masaryk University, Brno, Czech Republic"],"email":"kiraa@mail.muni.cz","is_corresponding":false,"name":"Filip Op\u00e1len\u00fd"},{"affiliations":["Masaryk University, Brno, Czech Republic"],"email":"paloulbrich@gmail.com","is_corresponding":false,"name":"Pavol Ulbrich"},{"affiliations":["Masaryk University, Brno, Czech Republic","St. 
Anne\u2019s University Hospital, Brno, Czech Republic"],"email":"joan.planas@mail.muni.cz","is_corresponding":false,"name":"Joan Planas-Iglesias"},{"affiliations":["Masaryk University, Brno, Czech Republic","University of Bergen, Bergen, Norway"],"email":"xbyska@fi.muni.cz","is_corresponding":false,"name":"Jan By\u0161ka"},{"affiliations":["Masaryk University, Brno, Czech Republic","St. Anne\u2019s University Hospital, Brno, Czech Republic"],"email":"stourac.jan@gmail.com","is_corresponding":false,"name":"Jan \u0160toura\u010d"},{"affiliations":["Faculty of Science, Masaryk University, Brno, Czech Republic","St. Anne\u2019s University Hospital Brno, Brno, Czech Republic"],"email":"222755@mail.muni.cz","is_corresponding":false,"name":"David Bedn\u00e1\u0159"},{"affiliations":["Masaryk University, Brno, Czech Republic"],"email":"katarina.furmanova@gmail.com","is_corresponding":true,"name":"Katar\u00edna Furmanov\u00e1"},{"affiliations":["Masaryk University, Brno, Czech Republic"],"email":"kozlikova@fi.muni.cz","is_corresponding":false,"name":"Barbora Kozlikova"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Katar\u00edna Furmanov\u00e1"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1597","time_end":"","time_stamp":"","time_start":"","title":"Visual Support for the Loop Grafting Workflow on Proteins","uid":"v-full-1597","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1599":{"abstract":"Existing deep learning-based surrogate models facilitate efficient data generation, but fall short in uncertainty quantification, efficient parameter space exploration, and reverse prediction. In our work, we introduce SurroFlow, a novel normalizing flow-based surrogate model, to learn the invertible transformation between simulation parameters and simulation outputs. The model not only allows accurate predictions of simulation outcomes for a given simulation parameter but also supports uncertainty quantification in the data generation process. Additionally, it enables efficient simulation parameter recommendation and exploration. We integrate SurroFlow and a genetic algorithm as the backend of a visual interface to support effective user-guided ensemble simulation exploration and visualization. 
Our framework significantly reduces the computational costs while enhancing the reliability and exploration capabilities of scientific surrogate models.","accessible_pdf":false,"authors":[{"affiliations":["The Ohio State University, Columbus, United States","The Ohio State University, Columbus, United States"],"email":"shen.1250@osu.edu","is_corresponding":true,"name":"JINGYI SHEN"},{"affiliations":["The Ohio State University, Columbus, United States","The Ohio State University, Columbus, United States"],"email":"duan.418@osu.edu","is_corresponding":false,"name":"Yuhan Duan"},{"affiliations":["The Ohio State University , Columbus , United States","The Ohio State University , Columbus , United States"],"email":"hwshen@cse.ohio-state.edu","is_corresponding":false,"name":"Han-Wei Shen"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["JINGYI SHEN"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1599","time_end":"","time_stamp":"","time_start":"","title":"SurroFlow: A Flow-Based Surrogate Model for Parameter Space Exploration and Uncertainty Quantification","uid":"v-full-1599","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1603":{"abstract":"Multi-modal embeddings form the foundation for vision-language models, such as CLIP embeddings, the most widely used text-image embeddings. However, these embeddings are hard to interpret and vulnerable to subtle misalignment of cross-modal features, resulting in decreased model performance and diminished generalization. To address this problem, we design ModalChorus, an interactive system for visual probing and alignment of multi-modal embeddings. ModalChorus primarily offers a two-stage process: 1) embedding probing with Modal Fusion Map (MFM), a novel parametric dimensionality reduction method that integrates both metric and nonmetric objectives to enhance modality fusion; and 2) embedding alignment that allows users to interactively articulate intentions for both point-set and set-set alignments. Quantitative and qualitative comparisons for CLIP embeddings with existing dimensionality reduction (e.g., t-SNE and MDS) and data fusion (e.g., data context map) methods demonstrate the advantages of MFM in showcasing cross-modal features over common vision-language datasets. 
Case studies reveal that ModalChorus can facilitate intuitive discovery of misalignment and efficient re-alignment in scenarios ranging from zero-shot classification to cross-modal retrieval and generation.","accessible_pdf":false,"authors":[{"affiliations":["The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China"],"email":"yyebd@connect.ust.hk","is_corresponding":true,"name":"Yilin Ye"},{"affiliations":["The Hong Kong University of Science and Technology(Guangzhou), Guangzhou, China"],"email":"sxiao713@connect.hkust-gz.edu.cn","is_corresponding":false,"name":"Shishi Xiao"},{"affiliations":["the Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China"],"email":"xingchen.zeng@outlook.com","is_corresponding":false,"name":"Xingchen Zeng"},{"affiliations":["The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China","The Hong Kong University of Science and Technology, Hong Kong SAR, China"],"email":"weizeng@hkust-gz.edu.cn","is_corresponding":false,"name":"Wei Zeng"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yilin Ye"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1603","time_end":"","time_stamp":"","time_start":"","title":"ModalChorus: Visual Probing and Alignment of Multi-modal Embeddings via Modal Fusion Map","uid":"v-full-1603","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1606":{"abstract":"With the increase of graph size, it becomes difficult or even impossible to visualize graph structures clearly within the limited screen space. Consequently, it is crucial to design effective visual representations for large graphs. In this paper, we propose AdaMotif, a novel approach that can capture the essential structure patterns of large graphs and effectively reveal the overall structures via adaptive motif designs. Specifically, our approach involves partitioning a given large graph into multiple subgraphs, then clustering similar subgraphs and extracting similar structural information within each cluster. Subsequently, adaptive motifs representing each cluster are generated and utilized to replace the corresponding subgraphs, leading to a simplified visualization. Our approach aims to preserve as much information from the subgraphs as possible, effectively simplifying graphs while minimizing information loss. Notably, our approach successfully visualizes crucial community information within a large graph. We conduct case studies and a user study using both synthetic and real-world graphs to validate the effectiveness of our proposed approach. 
The results demonstrate the capability of our approach in simplifying graphs while retaining important structural and community information.","accessible_pdf":false,"authors":[{"affiliations":["Shenzhen University, Shenzhen, China"],"email":"hzhou@szu.edu.cn","is_corresponding":true,"name":"Hong Zhou"},{"affiliations":["Shenzhen University, Shenzhen, China"],"email":"laipeifeng1111@gmail.com","is_corresponding":false,"name":"Peifeng Lai"},{"affiliations":["Shenzhen University, Shenzhen, China"],"email":"zhida.sun@connect.ust.hk","is_corresponding":false,"name":"Zhida Sun"},{"affiliations":["Shenzhen University, Shenzhen, China"],"email":"2310274034@email.szu.edu.cn","is_corresponding":false,"name":"Xiangyuan Chen"},{"affiliations":["Shenzhen University, Shen Zhen, China"],"email":"275621136@qq.com","is_corresponding":false,"name":"Yang Chen"},{"affiliations":["Shenzhen University, Shenzhen, China"],"email":"hswu@szu.edu.cn","is_corresponding":false,"name":"Huisi Wu"},{"affiliations":["Nanyang Technological University, Singapore, Singapore"],"email":"yong-wang@ntu.edu.sg","is_corresponding":false,"name":"Yong WANG"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Hong Zhou"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1606","time_end":"","time_stamp":"","time_start":"","title":"AdaMotif: Graph Simplification via Adaptive Motif Design","uid":"v-full-1606","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1612":{"abstract":"Partitionings (or segmentations) divide a given domain into disjoint connected regions whose union forms again the entire domain. Multi-dimensional partitionings occur, for example, when analyzing parameter spaces of simulation models, where each segment of the partitioning represents a region of similar model behavior. Having computed a partitioning, one is commonly interested in understanding how large the segments are and which segments lie next to each other. While visual representations of 2D domain partitionings that reveal sizes and neighborhoods are straightforward, this is no longer the case when considering multi-dimensional domains of three or more dimensions. We propose an algorithm for computing 2D embeddings of multi-dimensional partitionings. The embedding shall have the following properties: It shall maintain the topology of the partitioning and optimize the area sizes and joint boundary lengths of the embedded segments to match the respective sizes and lengths in the multi-dimensional domain. We demonstrate the effectiveness of our approach by applying it to different use cases, including the visual exploration of 3D spatial domain segmentations and multi-dimensional parameter space partitionings of simulation ensembles. 
We numerically evaluate our algorithm with respect to how well sizes and lengths are preserved depending on the dimensionality of the domain and the number of segments.","accessible_pdf":false,"authors":[{"affiliations":["University of M\u00fcnster, M\u00fcnster, Germany"],"email":"m_ever14@uni-muenster.de","is_corresponding":true,"name":"Marina Evers"},{"affiliations":["University of M\u00fcnster, M\u00fcnster, Germany"],"email":"linsen@uni-muenster.de","is_corresponding":false,"name":"Lars Linsen"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Marina Evers"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1612","time_end":"","time_stamp":"","time_start":"","title":"2D Embeddings of Multi-dimensional Partitionings","uid":"v-full-1612","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1613":{"abstract":"We present a path-based design model and system for designing and creating visualisations. Our model represents a systematic approach to constructing visual representations of data or concepts following a predefined sequence of steps. The initial step involves outlining the overall appearance of the visualisation by creating a skeleton structure, referred to as a flowpath. Subsequently, we specify objects, visual marks, properties, and appearance, storing them in a gene. Lastly, we map data onto the flowpath, ensuring suitable morphisms. Alternative designs are created by exchanging values in the gene. For example, designs that share similar traits are created by making small incremental changes to the gene. Our design method develops a wide variety of creative ideas, space-filling visualisations, and traditional designs (bar chart, pie chart, etc.). Our implementation demonstrates the model, and we apply the output visualisations onto a smart-watch and on visualisation dashboards. In this article we (1) introduce, define and explain the path model and discuss possibilities for its use, (2) present our implementation, results, and evaluation, and (3) demonstrate and evaluate an application of its use on a mobile watch.","accessible_pdf":false,"authors":[{"affiliations":["ExaDev, Gaerwen, United Kingdom","Bangor University, Bangor, United Kingdom"],"email":"james.ogge@gmail.com","is_corresponding":false,"name":"James R Jackson"},{"affiliations":["Bangor University, Bangor, United Kingdom"],"email":"p.ritsos@bangor.ac.uk","is_corresponding":false,"name":"Panagiotis D. Ritsos"},{"affiliations":["Bangor University, Bangor, United Kingdom"],"email":"p.butcher@bangor.ac.uk","is_corresponding":false,"name":"Peter W. S. 
Butcher"},{"affiliations":["Bangor University, Bangor, United Kingdom"],"email":"j.c.roberts@bangor.ac.uk","is_corresponding":true,"name":"Jonathan C Roberts"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Jonathan C Roberts"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1613","time_end":"","time_stamp":"","time_start":"","title":"Path-based Design Model for Constructing and Exploring Alternative Visualisations","uid":"v-full-1613","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1615":{"abstract":"We present Cell2Cell, a novel visual analytics approach for quantifying and visualizing networks of cell-cell interactions in three-dimensional (3D) multi-channel cancerous tissue data. By analyzing cellular interactions, biomedical domain experts can gain a more accurate understanding of the intricate relationships between cancer and immune cells. Recent methods have focused on inferring interaction based on the proximity of cells in low-resolution 2D multi-channel imaging data. By contrast, we analyze cell interactions by quantifying the intensities of protein expressions extracted from high-resolution 3D multi-channel volume data. Such analyses have a strong exploratory nature and require a tight integration of domain experts in the analysis loop to leverage their deep knowledge. We propose two complementary semi-automated approaches to cope with the increasing size and complexity of the data in an interactive fashion: On the one hand, we interpret cell-to-cell interactions as edges in a cell graph and analyze the image signal (protein expressions) along those edges, using spatial as well as abstract data visualizations. Complementary, we propose a cell-centered approach, enabling scientists to visually analyze polarized distributions of proteins in three dimensions, which also captures neighboring cells with biochemical and cell biological consequences. We evaluate our application in two case studies, where computational biologists and medical experts use \\tool to investigate tumor micro-environments to identify and quantify T-cell activation in human tissue data. 
We confirmed that our tool can fully solve both use cases and enables a streamlined and detailed analysis of cell-cell interactions.","accessible_pdf":false,"authors":[{"affiliations":["Harvard Medical School, Boston, United States"],"email":"eric.moerth@gmx.at","is_corresponding":true,"name":"Eric M\u00f6rth"},{"affiliations":["University of Vienna, Vienna, Austria"],"email":"kevin.sidak@univie.ac.at","is_corresponding":false,"name":"Kevin Sidak"},{"affiliations":["Harvard Medical School, Boston, United States"],"email":"zoltan_maliga@hms.harvard.edu","is_corresponding":false,"name":"Zoltan Maliga"},{"affiliations":["University of Vienna, Vienna, Austria"],"email":"torsten.moeller@univie.ac.at","is_corresponding":false,"name":"Torsten M\u00f6ller"},{"affiliations":["Harvard Medical School, Boston, United States"],"email":"nils@hms.harvard.edu","is_corresponding":false,"name":"Nils Gehlenborg"},{"affiliations":["Harvard University, Cambridge, United States"],"email":"peter_sorger@hms.harvard.edu","is_corresponding":false,"name":"Peter Sorger"},{"affiliations":["Harvard University, Cambridge, United States"],"email":"pfister@seas.harvard.edu","is_corresponding":false,"name":"Hanspeter Pfister"},{"affiliations":["Harvard University, Cambridge, United States"],"email":"jbeyer@g.harvard.edu","is_corresponding":false,"name":"Johanna Beyer"},{"affiliations":["New York University, New York, United States","Harvard University, Boston, United States"],"email":"rk4815@nyu.edu","is_corresponding":false,"name":"Robert Kr\u00fcger"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Eric M\u00f6rth"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1615","time_end":"","time_stamp":"","time_start":"","title":"Cell2Cell: Explorative Cell Interaction Analysis in Multi-Volumetric Tissue Data","uid":"v-full-1615","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1626":{"abstract":"We propose and study a novel cross-reality environment that seamlessly integrates a monoscopic 2D surface (an interactive screen with touch and pen input) with a stereoscopic 3D space (an augmented reality HMD) to jointly host spatial data visualizations. This innovative approach combines the best of two conventional methods of displaying and manipulating spatial 3D data, enabling users to fluidly explore diverse visual forms using tailored interaction techniques. Providing such effective 3D data exploration techniques is pivotal for conveying the data's intricate spatial structures---often at multiple spatial or semantic scales---across various application domains that require diverse visual representations for effective visualization. To understand user reactions to our new environment, we began with an elicitation user study, in which we captured their responses and interactions. We observed that users adapted their interaction approaches based on perceived visual representations, with natural transitions in spatial awareness and actions while navigating across the physical surface. Our findings then informed the development of a design space for spatial data exploration in cross-reality. 
We thus developed cross-reality environments tailored to three distinct domains: for 3D molecular structure data, for 3D point cloud data, and for 3D anatomical data. In particular, we designed interaction techniques that account for the inherent features of interactions in both spaces, facilitating various forms of interaction including mid-air gestures, touch interactions, pen interactions, and combinations thereof to enhance the users' sense of presence and engagement. We assessed the usability of our environment with biologists, focusing on its use for domain research. In addition, we evaluated our interaction transition designs with virtual and mixed-reality experts to gather further insights. As a result, we provide our design suggestions for the cross-reality environment, emphasizing the interaction with diverse visual representations and seamless interaction transitions between 2D and 3D spaces.","accessible_pdf":false,"authors":[{"affiliations":["Xi'an Jiaotong-Liverpool University, Suzhou, China"],"email":"lixiang.zhao17@student.xjtlu.edu.cn","is_corresponding":false,"name":"Lixiang Zhao"},{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Orsay, France","Inria, Saclay, France"],"email":"tobias.isenberg@gmail.com","is_corresponding":false,"name":"Tobias Isenberg"},{"affiliations":["Xi'an Jiaotong-Liverpool University, Suzhou, China"],"email":"fuqi.xie20@student.xjtlu.edu.cn","is_corresponding":false,"name":"Fuqi Xie"},{"affiliations":["Xi'an Jiaotong-Liverpool University, Suzhou, China"],"email":"hainingliang@hkust-gz.edu.cn","is_corresponding":false,"name":"Hai-Ning Liang"},{"affiliations":["Xi'an Jiaotong-Liverpool University, Suzhou, China"],"email":"lingyun.yu@xjtlu.edu.cn","is_corresponding":true,"name":"Lingyun Yu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Lingyun Yu"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1626","time_end":"","time_stamp":"","time_start":"","title":"SpatialTouch: Exploring Spatial Data Visualizations in Cross-reality","uid":"v-full-1626","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1632":{"abstract":"High-dimensional data, characterized by many features, can be difficult to visualize effectively. Dimensionality reduction techniques, such as PCA, UMAP, and t-SNE, address this challenge by projecting the data into a lower-dimensional space while preserving important relationships. TopoMap is another technique that excels at preserving the underlying structure of the data, leading to interpretable visualizations. In particular, TopoMap maps the high-dimensional data into a visual space, guaranteeing that the 0-dimensional persistence diagram of the Rips filtration of the visual space matches the one from the high-dimensional data. However, the original TopoMap algorithm can be slow and its layout can be too sparse for large and complex datasets. In this paper, we propose three improvements to TopoMap: 1) a more space-efficient layout, 2) a significantly faster implementation, and 3) a novel treemap-based representation to aid the exploration of the projections. 
These advancements make TopoMap, now referred to as TopoMap++, a more powerful tool for visualizing high-dimensional data, similar to how t-SNE surpassed SNE in popularity.","accessible_pdf":false,"authors":[{"affiliations":["New York University, New York City, United States"],"email":"vitoriaguardieiro@gmail.com","is_corresponding":true,"name":"Vitoria Guardieiro"},{"affiliations":["New York University, New York City, United States"],"email":"felipedeoliveira1407@gmail.com","is_corresponding":false,"name":"Felipe Inagaki de Oliveira"},{"affiliations":["Microsoft Research India, Bangalore, India"],"email":"harish.doraiswamy@microsoft.com","is_corresponding":false,"name":"Harish Doraiswamy"},{"affiliations":["University of Sao Paulo, Sao Carlos, Brazil"],"email":"gnonato@icmc.usp.br","is_corresponding":false,"name":"Luis Gustavo Nonato"},{"affiliations":["New York University, New York City, United States"],"email":"csilva@nyu.edu","is_corresponding":false,"name":"Claudio Silva"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Vitoria Guardieiro"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1632","time_end":"","time_stamp":"","time_start":"","title":"TopoMap++: A faster and more space efficient technique to compute projections with topological guarantees","uid":"v-full-1632","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1638":{"abstract":"Probability density function (PDF) curves are among the few charts on a Cartesian coordinate system that are commonly presented without y-axes. This design decision may be due to the lack of relevance of vertical scaling in normal PDFs. In fact, as long as two normal PDFs have the same mean and standard deviations (SDs), they can be scaled to occupy different amounts of vertical space while still remaining statistically identical. Because unscaled PDF height increases as SD decreases, visualization designers may find themselves tempted to vertically shrink low-SD PDFs to avoid occlusion or save white space in their figures. While irregular vertical scaling has been explored in bar and line charts, the visualization community has yet to investigate how this purely visual manipulation may affect reader comparisons of PDFs. In this paper, we present two preregistered quantitative experiments (n=600, n=401) that systematically demonstrate that vertical scaling can lead to misinterpretations of PDFs. We also test visual interventions to mitigate misinterpretation. In some contexts, we find that including a y-axis reduces this effect. Overall, we find that keeping vertical scaling consistent, and therefore maintaining equal pixel areas under PDF curves, results in the highest likelihood of accurate comparisons. Our findings provide the first insights into the impact of vertical scaling on PDFs, and reveal the complicated nature of proportional area comparisons.","accessible_pdf":false,"authors":[{"affiliations":["Northeastern University, Boston, United States"],"email":"racquel.fygenson@gmail.com","is_corresponding":true,"name":"Racquel Fygenson"},{"affiliations":["Northeastern University, Boston, United States"],"email":"l.padilla@northeastern.edu","is_corresponding":false,"name":"Lace M. 
Padilla"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Racquel Fygenson"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1638","time_end":"","time_stamp":"","time_start":"","title":"The Impact of Vertical Scaling on Normal Probability Density Function Plots","uid":"v-full-1638","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1642":{"abstract":"Despite the development of numerous visual analytics tools for event sequence data across various domains, including, but not limited to healthcare, digital marketing, and user behavior analysis, comparing these domain-specific investigations and transferring the results to new datasets and problem areas remain challenging. Task abstractions can help us go beyond domain-specific details, but existing visualization task abstractions are insufficient for event sequence visual analytics because they primarily focus on tabular datasets and often overlook automated analytical techniques. To address this gap, we propose a domain-agnostic multi-level task framework for event sequence analysis, derived from an analysis of 58 papers that present event sequence visualization systems. Our framework consists of four levels: objective, intent, strategy, and technique. Overall objectives identify the main goals of analysis. Intents comprises five high-level approaches adopted at each analysis step: augment data, simplify data, configure data, configure visualization, and create provenance. Each intent is accomplished through a number of strategies, for instance, data simplification can be achieved through aggregation, summarization, or segmentation. Finally, each strategy can be implemented by a set of techniques depending on the input and output components. We further show that techniques can be expressed through a quartet of action-input-output-criteria. 
We demonstrate the framework\u2019s power through mapping case studies and discuss its similarities and differences with previous event sequence task taxonomies.","accessible_pdf":false,"authors":[{"affiliations":["University of Maryland, College Park, College Park, United States"],"email":"kzintas@umd.edu","is_corresponding":true,"name":"Kazi Tasnim Zinat"},{"affiliations":["University of Maryland, College Park, United States"],"email":"ssakhamu@terpmail.umd.edu","is_corresponding":false,"name":"Saimadhav Naga Sakhamuri"},{"affiliations":["University of Maryland, College Park, United States"],"email":"achen151@terpmail.umd.edu","is_corresponding":false,"name":"Aaron Sun Chen"},{"affiliations":["University of Maryland, College Park, United States"],"email":"leozcliu@umd.edu","is_corresponding":false,"name":"Zhicheng Liu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Kazi Tasnim Zinat"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1642","time_end":"","time_stamp":"","time_start":"","title":"A Multi-Level Task Framework for Event Sequence Analysis","uid":"v-full-1642","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1681":{"abstract":"In recent years, the global adoption of electric vehicles (EVs) has surged, prompting a corresponding rise in the installation of charging stations. This proliferation has underscored the importance of expediting the deployment of charging infrastructure. Both academia and industry have thus devoted efforts to addressing the charging station location problem (CSLP) to streamline this process. However, prevailing algorithms addressing CSLP are hampered by restrictive assumptions and computational overhead, leading to a dearth of comprehensive evaluations in the spatiotemporal dimensions. Consequently, their practical viability is restricted. Moreover, the placement of charging stations exerts a significant impact on both the road network and the power grid, which necessitates the evaluation of the potential post-deployment impacts on these interconnected networks holistically. In this study, we propose CSLens, a visual analytics system designed to inform charging station deployment decisions through the lens of coupled transportation and power networks. CSLens offers multiple visualizations and interactive features, empowering users to delve into the existing charging station layout, explore alternative deployment solutions, and assess the ensuing impact. To validate the efficacy of CSLens, we conducted two case studies and engaged in interviews with domain experts. Through these efforts, we substantiated the usability and practical utility of CSLens in enhancing the decision-making process surrounding charging station deployment. 
Our findings underscore CSLens\u2019s potential to serve as a valuable asset in navigating the complexities of charging infrastructure planning.","accessible_pdf":false,"authors":[{"affiliations":["Sun Yat-sen University, Shenzhen, China"],"email":"zhangyt85@mail2.sysu.edu.cn","is_corresponding":false,"name":"Yutian Zhang"},{"affiliations":["Sun Yat-sen University, Shenzhen, China"],"email":"xulw8@mail2.sysu.edu.cn","is_corresponding":false,"name":"Liwen Xu"},{"affiliations":["Sun Yat-sen University, Shenzhen, China"],"email":"taoshc@mail2.sysu.edu.cn","is_corresponding":false,"name":"Shaocong Tao"},{"affiliations":["Sun Yat-sen University, Shenzhen, China"],"email":"guanqx3@mail.sysu.edu.cn","is_corresponding":false,"name":"Quanxue Guan"},{"affiliations":["ShanghaiTech University, Shanghai, China"],"email":"liquan@shanghaitech.edu.cn","is_corresponding":false,"name":"Quan Li"},{"affiliations":["Sun Yat-sen University, Shenzhen, China"],"email":"zenghp5@mail.sysu.edu.cn","is_corresponding":true,"name":"Haipeng Zeng"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Haipeng Zeng"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1681","time_end":"","time_stamp":"","time_start":"","title":"CSLens: Towards Better Deploying Charging Stations via Visual Analytics \u2014\u2014 A Coupled Networks Perspective","uid":"v-full-1681","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1693":{"abstract":"We introduce a visual analysis method for multiple causality graphs with different outcome variables, namely, multi-outcome causality graphs. Multi-outcome causality graphs are important in healthcare for understanding multimorbidity and comorbidity. To support the visual analysis, we collaborated with medical experts to devise two comparative visualization techniques at different stages of the analysis process. First, a progressive visualization method is proposed for comparing multiple state-of-the-art causal discovery algorithms. The method can handle mixed-type datasets comprising both continuous and categorical variables and assist in the creation of a fine-tuned causality graph of a single outcome. Second, a comparative graph layout technique and specialized visual encodings are devised for the quick comparison of multiple causality graphs. In our visual analysis approach, analysts start by building individual causality graphs for each outcome variable, and then, multi-outcome causality graphs are generated and visualized with our comparative technique for analyzing differences and commonalities of these causality graphs. 
Evaluation includes quantitative measurements on benchmark datasets, a case study with a medical expert, and expert user studies with real-world health research data.","accessible_pdf":false,"authors":[{"affiliations":["Institute of Medical Technology, Peking University Health Science Center, Beijing, China","National Institute of Health Data Science, Peking University, Beijing, China"],"email":"mengjiefan@bjmu.edu.cn","is_corresponding":true,"name":"Mengjie Fan"},{"affiliations":["Beihang University, Beijing, China","Peking University, Beijing, China"],"email":"yu.jinlu@qq.com","is_corresponding":false,"name":"Jinlu Yu"},{"affiliations":["University of Stuttgart, Stuttgart, Germany"],"email":"weiskopf@visus.uni-stuttgart.de","is_corresponding":false,"name":"Daniel Weiskopf"},{"affiliations":["Tongji College of Design and Innovation, Shanghai, China"],"email":"nan.cao@gmail.com","is_corresponding":false,"name":"Nan Cao"},{"affiliations":["Beijing University of Chinese Medicine, Beijing, China"],"email":"wanghuaiyuelva@126.com","is_corresponding":false,"name":"Huaiyu Wang"},{"affiliations":["Peking University, Beijing, China"],"email":"zhoulng@pku.edu.cn","is_corresponding":false,"name":"Liang Zhou"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Mengjie Fan"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1693","time_end":"","time_stamp":"","time_start":"","title":"Visual Analysis of Multi-outcome Causal Graphs","uid":"v-full-1693","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1699":{"abstract":"Room-scale immersive data visualisations provide viewers a wide-scale overview of a large dataset, but to interact precisely with individual data points they typically have to navigate to change their point of view. In traditional screen-based visualisations, focus-and-context techniques allow visualisation users to keep a full dataset in view while making detailed selections. Such techniques have been studied extensively on desktop to allow precise selection within large data sets, but they have not been explored in immersive 3D modalities. In this paper we develop a novel immersive focus-and-context technique based on a \u201cmagic portal\u201d metaphor adapted specifically for data visualisation scenarios. An extendable-hand interaction technique is used to place a portal close to the region of interest. The other end of the portal then opens comfortably within the user's physical reach such that they can reach through to precisely select individual data points. Through a controlled study with 24 participants, we find strong evidence that portals reduce overshoots in selection and overall hand trajectory length, reducing arm fatigue compared to ranged interaction without the portal. The portals also enable us to use a robot arm to provide haptic feedback for data within the limited volume of the portal region. 
We demonstrate applications for portal-based selection through two use-case scenarios.","accessible_pdf":false,"authors":[{"affiliations":["Monash University, Melbourne, Australia"],"email":"dai.shaozhang@gmail.com","is_corresponding":true,"name":"Shaozhang Dai"},{"affiliations":["Monash University, Melbourne, Australia"],"email":"yi.li5@monash.edu","is_corresponding":false,"name":"Yi Li"},{"affiliations":["The University of British Columbia (Okanagan Campus), Kelowna, Canada"],"email":"barrett.ens@ubc.ca","is_corresponding":false,"name":"Barrett Ens"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"lonni.besancon@gmail.com","is_corresponding":false,"name":"Lonni Besan\u00e7on"},{"affiliations":["Monash University, Melbourne, Australia"],"email":"tgdwyer@gmail.com","is_corresponding":false,"name":"Tim Dwyer"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Shaozhang Dai"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1699","time_end":"","time_stamp":"","time_start":"","title":"Precise Embodied Data Selection in Room-scale Visualisations While Retaining View Context","uid":"v-full-1699","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1705":{"abstract":"Contour trees describe the topology of level sets in scalar fields and are widely used in topological data analysis and visualization. A main challenge for utilizing contour trees for large-scale scientific data is their computation at scale using high-performance computing. To address this challenge, recent work has introduced distributed hierarchical contour trees for distributed computation and storage of contour trees. However, effective use of these distributed structures in analysis and visualization requires subsequent computation of geometric properties and branch decomposition to support contour extraction and exploration. In this work, we introduce distributed algorithms for augmentation, hypersweeps, and branch decomposition that enable parallel computation of geometric properties, and support the use of distributed contour trees as a query structure for scientific exploration. 
We evaluate the parallel performance of these algorithms and apply them to identify and extract important contours for scientific visualization.","accessible_pdf":false,"authors":[{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"mingzhefluorite@gmail.com","is_corresponding":true,"name":"Mingzhe Li"},{"affiliations":["University of Leeds, Leeds, United Kingdom"],"email":"h.carr@leeds.ac.uk","is_corresponding":false,"name":"Hamish Carr"},{"affiliations":["Lawrence Berkeley National Laboratory, Berkeley, United States"],"email":"oruebel@lbl.gov","is_corresponding":false,"name":"Oliver R\u00fcbel"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"wang.bei@gmail.com","is_corresponding":false,"name":"Bei Wang"},{"affiliations":["Lawrence Berkeley National Laboratory, Berkeley, United States"],"email":"ghweber@lbl.gov","is_corresponding":false,"name":"Gunther H Weber"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Mingzhe Li"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1705","time_end":"","time_stamp":"","time_start":"","title":"Distributed Augmentation, Hypersweeps, and Branch Decomposition of Contour Trees for Scientific Exploration","uid":"v-full-1705","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1708":{"abstract":"The widespread use of Deep Neural Networks (DNNs) has recently resulted in their application to challenging scientific visualization tasks. While advanced DNNs demonstrate impressive generalization abilities, understanding factors like prediction quality, confidence, robustness, and uncertainty is crucial. These insights aid application scientists in making informed decisions. However, DNNs lack inherent mechanisms to measure prediction uncertainty, prompting the creation of distinct frameworks for constructing robust uncertainty-aware models tailored to various visualization tasks. In this work, we develop uncertainty-aware implicit neural representations to model steady-state vector fields effectively. We comprehensively evaluate the efficacy of two principled deep uncertainty estimation techniques: (1) Deep Ensemble and (2) Monte Carlo Dropout, aimed at enabling uncertainty-informed visual analysis of features within steady vector field data. Our detailed exploration using several vector data sets indicates that uncertainty-aware models generate informative visualization results of vector field features. 
Furthermore, incorporating prediction uncertainty improves the resilience and interpretability of our DNN model, rendering it applicable for the analysis of complex vector field data sets.","accessible_pdf":false,"authors":[{"affiliations":["Indian Institute of Technology Kanpur , Kanpur, India"],"email":"atulkrfcb@gmail.com","is_corresponding":false,"name":"Atul Kumar"},{"affiliations":["Indian Institute of Technology Kanpur , Kanpur , India"],"email":"gsiddharth2209@gmail.com","is_corresponding":false,"name":"Siddharth Garg"},{"affiliations":["Indian Institute of Technology Kanpur (IIT Kanpur), Kanpur, India"],"email":"soumya.cvpr@gmail.com","is_corresponding":true,"name":"Soumya Dutta"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Soumya Dutta"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1708","time_end":"","time_stamp":"","time_start":"","title":"Uncertainty-Aware Deep Neural Representations for Visual Analysis of Vector Field Data","uid":"v-full-1708","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1726":{"abstract":"User experience in data visualization is typically assessed through post-viewing self-reports, but these overlook the dynamic cognitive processes during interaction. This study explores the use of mind wandering as a dynamic measure during visualization exploration. Participants reported mind wandering while viewing visualizations from a pre-labeled visualization database and then provided quantitative ratings of trust, engagement, and design quality, along with qualitative descriptions and short-term/long-term recall assessments. Results show that mind wandering negatively affects short-term visualization recall and various post-viewing measures, particularly for visualizations with little text annotation. Further, the type of mind wandering impacts engagement and emotional response. Mind wandering also acts as a serial mediator between visualization design elements and post-viewing measures. Overall, this research underscores the importance of incorporating mind wandering as a dynamic measure in visualization design and evaluation, offering novel avenues for enhancing user engagement and comprehension.","accessible_pdf":false,"authors":[{"affiliations":["Arizona State University, Tempe, United States"],"email":"aarunku5@asu.edu","is_corresponding":true,"name":"Anjana Arunkumar"},{"affiliations":["Northeastern University, Boston, United States"],"email":"l.padilla@northeastern.edu","is_corresponding":false,"name":"Lace M. 
Padilla"},{"affiliations":["Arizona State University, Tempe, United States"],"email":"cbryan16@asu.edu","is_corresponding":false,"name":"Chris Bryan"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Anjana Arunkumar"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1726","time_end":"","time_stamp":"","time_start":"","title":"Mind Drifts, Data Shifts: Utilizing Mind Wandering to Track the Evolution of User Experience with Data Visualizations","uid":"v-full-1726","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1730":{"abstract":"Understanding the input and output of data wrangling scripts is crucial for various tasks like debugging codes and onboarding new data. However, existing research on script understanding primarily focuses on revealing the process of data transformations, lacking the ability to analyze the potential scope, i.e., the space of script inputs and outputs. Meanwhile, constructing input/output space during script analysis is challenging, as the wrangling scripts could be semantically complex and diverse, and the association between different data objects is intricate. To facilitate data workers in understanding the input and output spaces of wrangling scripts, we summarize ten types of constraints to express table spaces, and build a mapping between data transformations and these constraints to guide the construction of the input/output for individual transformations. Then, we propose a constraint generation model for integrating table constraints across multiple transformations. Based on the model, we develop Ferry, an interactive system that extracts and visualizes the data constraints describing the input and output spaces of data wrangling scripts, thereby enabling users to grasp the high-level semantics of complex scripts and locate the origins of faulty data transformations. Besides, Ferry provides example input and output data to assist users in interpreting the extracted constraints, checking and resolving the conflicts between these constraints and any uploaded dataset. Ferry's effectiveness and usability are evaluated via a usage scenario and two case studies: the first assists users in onboarding new data and debugging scripts, while the second verifies input-output compatibility across data processing modules. 
Furthermore, an illustrative application is presented to demonstrate Ferry's flexibility.","accessible_pdf":false,"authors":[{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"rickyluozs@gmail.com","is_corresponding":true,"name":"Zhongsu Luo"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"kaixiong@zju.edu.cn","is_corresponding":false,"name":"Kai Xiong"},{"affiliations":["Zhejiang University, Hangzhou,Zhejiang, China"],"email":"3220105578@zju.edu.cn","is_corresponding":false,"name":"Jiajun Zhu"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"chenran928@zju.edu.cn","is_corresponding":false,"name":"Ran Chen"},{"affiliations":["Newcastle University, Newcastle Upon Tyne, United Kingdom"],"email":"xinhuan.shu@gmail.com","is_corresponding":false,"name":"Xinhuan Shu"},{"affiliations":["Zhejiang University, Ningbo, China"],"email":"dweng@zju.edu.cn","is_corresponding":false,"name":"Di Weng"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"ycwu@zju.edu.cn","is_corresponding":false,"name":"Yingcai Wu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Zhongsu Luo"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1730","time_end":"","time_stamp":"","time_start":"","title":"Ferry: Toward Better Understanding of Input/Output Space for Data Wrangling Scripts","uid":"v-full-1730","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1738":{"abstract":"As a step towards improving visualization literacy, we investigated how students approach reading visualizations differently after taking a university-level visualization course. We asked students to verbally walk through their process of making sense of unfamiliar visualizations, and conducted a qualitative analysis of these walkthroughs. Our qualitative analysis found changes in students' walkthroughs consistent with explicit learning goals of visualization courses. After taking a visualization course, students also engaged with visualizations in more sophisticated ways not fully captured by explicit learning goals: they were more likely to exhibit design empathy by thinking critically about the tradeoffs behind why a chart was designed in a particular way, and were better able to deconstruct a chart to make sense of it. We also gave students a quantitative assessment of visualization literacy and found no evidence of scores improving after the class, likely because the test we used focused on a different set of skills than those emphasized in visualization classes. While current measurement instruments for visualization literacy are useful, we propose developing standardized assessments for additional aspects of visualization literacy, such as deconstruction and design empathy. We also suggest those additional aspects could be made more explicit in learning goals set by visualization educators. 
All supplemental materials are available at https://osf.io/w5pum/?view_only=f9eca3fa4711425582d454031b9c482e.","accessible_pdf":false,"authors":[{"affiliations":["Northwestern University, Evanston, United States"],"email":"maryam.hedayati@u.northwestern.edu","is_corresponding":true,"name":"Maryam Hedayati"},{"affiliations":["Northwestern University, Chicago, United States"],"email":"matthew.kay@gmail.com","is_corresponding":false,"name":"Matthew Kay"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Maryam Hedayati"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1738","time_end":"","time_stamp":"","time_start":"","title":"What University Students Learn In Visualization Classes","uid":"v-full-1738","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1746":{"abstract":"Hypergraphs provide a natural way to represent polyadic relationships in network data. For large hypergraphs, it is often difficult to visually detect structures within the data. Recently, a scalable polygon-based visualization framework was developed allowing hypergraphs with thousands of hyperedges to be simplified and examined at different levels of detail. However, this approach does not consider structures such as cycles, bridges, and branches. Consequently, structures can be lost at simplified scales, making interpretations for real-world applications unreliable. In this paper, we define hypergraph structures using the bipartite graph representation. Powered by our analysis, we provide an algorithm to decompose large hypergraphs into meaningful features and to identify regions of non-planarity. We also introduce a set of topology preserving and topology altering atomic operations, enabling the preservation of important structures while removing topological noise in simplified scales. We demonstrate our approach in several real-world applications.","accessible_pdf":false,"authors":[{"affiliations":["Oregon State University, Corvallis, United States"],"email":"oliverpe@oregonstate.edu","is_corresponding":false,"name":"Peter D Oliver"},{"affiliations":["Oregon State University, Corvallis, United States"],"email":"zhange@eecs.oregonstate.edu","is_corresponding":true,"name":"Eugene Zhang"},{"affiliations":["Oregon State University, Corvallis, United States"],"email":"zhangyue@oregonstate.edu","is_corresponding":false,"name":"Yue Zhang"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Eugene Zhang"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1746","time_end":"","time_stamp":"","time_start":"","title":"Structure-Aware Simplification for Hypergraph Visualization","uid":"v-full-1746","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1770":{"abstract":"The semantic similarity between documents of a text corpus can be visualized using map-like metaphors based on two-dimensional scatterplot layouts. 
These layouts result from a dimensionality reduction on the document-term matrix or a representation within a latent embedding, including topic models. The resulting layout thus depends on the input data and the hyperparameters of the dimensionality reduction and is affected by changes in either. However, such changes to the layout require additional cognitive effort from the user. In this work, we present a sensitivity study that analyzes the stability of these layouts concerning (1) changes in the text corpora, (2) changes in the hyperparameters, and (3) randomness in the initialization. Our approach has two stages: data measurement and data analysis. First, we derived layouts for combinations of three text corpora, six text embeddings, and a grid-search-inspired hyperparameter selection of the dimensionality reductions. Afterward, we quantified the similarity of the layouts through ten metrics covering local and global structures and class separation. Second, we analyzed the resulting 42817 tabular data points in a descriptive statistical analysis. From this, we derived guidelines for informed decisions on the layout algorithm and highlighted specific hyperparameter settings. We provide our implementation and results as a Git repository at https://github.com/hpicgs/Topic-Models-and-Dimensionality-Reduction-Sensitivity-Study .","accessible_pdf":false,"authors":[{"affiliations":["University of Potsdam, Digital Engineering Faculty, Hasso Plattner Institute, Potsdam, Germany"],"email":"daniel.atzberger@hpi.de","is_corresponding":true,"name":"Daniel Atzberger"},{"affiliations":["University of Potsdam, Potsdam, Germany"],"email":"tcech@uni-potsdam.de","is_corresponding":false,"name":"Tim Cech"},{"affiliations":["Hasso Plattner Institute, Faculty of Digital Engineering, University of Potsdam, Potsdam, Germany"],"email":"willy.scheibel@hpi.de","is_corresponding":false,"name":"Willy Scheibel"},{"affiliations":["Hasso Plattner Institute"],"email":"juergen.doellner@hpi.de","is_corresponding":false,"name":"J\u00fcrgen D\u00f6llner"},{"affiliations":["Faculty of Digital Engineering, University of Potsdam, Potsdam, Germany"],"email":"m.behrisch@uu.nl","is_corresponding":false,"name":"Michael Behrisch"},{"affiliations":["Utrecht University, Utrecht, Netherlands"],"email":"tobias.schreck@cgv.tugraz.at","is_corresponding":false,"name":"Tobias Schreck"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Daniel Atzberger"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1770","time_end":"","time_stamp":"","time_start":"","title":"A Large-Scale Sensitivity Analysis on Latent Embeddings and Dimensionality Reductions for Text Spatializations","uid":"v-full-1770","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1793":{"abstract":"This research explores a novel paradigm for preserving topological segmentations in existing error-bounded lossy compressors. 
Today's lossy compressors rarely consider preserving topologies such as Morse-Smale complexes, and the discrepancies in topology between original and decompressed datasets could potentially result in erroneous interpretations or even incorrect scientific conclusions. In this paper, we focus on preserving Morse-Smale segmentations in 2D/3D piecewise linear scalar fields, targeting the precise reconstruction of minimum/maximum labels induced by the integral curve of each vertex. The key is to derive a series of edits during compression time; the edits are applied to the decompressed data, leading to an accurate reconstruction of segmentations while keeping the error within the prescribed error bound. To this end, we developed a workflow to fix extrema and integral curves alternately until convergence within finite iterations; we accelerate each workflow component with shared-memory/GPU parallelism to make the performance practical for coupling with compressors. We demonstrate use cases with fluid dynamics, ocean, and cosmology application datasets, achieving a 1000x acceleration with an NVIDIA A100 GPU.","accessible_pdf":false,"authors":[{"affiliations":["The Ohio State University, Columbus, United States"],"email":"li.14025@osu.edu","is_corresponding":true,"name":"Yuxiao Li"},{"affiliations":["University of California, Riverside, Riverside, United States"],"email":"xlian007@ucr.edu","is_corresponding":false,"name":"Xin Liang"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"wang.bei@gmail.com","is_corresponding":false,"name":"Bei Wang"},{"affiliations":["The Ohio State University, Columbus, United States"],"email":"qiu.722@osu.edu","is_corresponding":false,"name":"Yongfeng Qiu"},{"affiliations":["Argonne National Laboratory, Lemont, United States"],"email":"lyan@anl.gov","is_corresponding":false,"name":"Lin Yan"},{"affiliations":["The Ohio State University, Columbus, United States"],"email":"guo.2154@osu.edu","is_corresponding":false,"name":"Hanqi Guo"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yuxiao Li"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1793","time_end":"","time_stamp":"","time_start":"","title":"MSz: An Efficient Parallel Algorithm for Correcting Morse-Smale Segmentations in Error-Bounded Lossy Compressors","uid":"v-full-1793","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1802":{"abstract":"In the biomedical domain, visualizing the document embeddings of an extensive corpus has been widely used in information-seeking tasks. However, three key challenges with existing visualizations make it difficult for clinicians to find information efficiently. First, the document embeddings used in these visualizations are generated statically by pretrained language models, which cannot adapt to the user's evolving interest. Second, existing document visualization techniques cannot effectively display how the documents are relevant to users\u2019 interest, making it difficult for users to identify the most pertinent information. Third, existing embedding generation and visualization processes suffer from a lack of interpretability, making it difficult to understand, trust and use the result for decision-making. 
In this paper, we present a novel visual analytics pipeline for user-driven document representation and iterative information seeking (VADIS). VADIS introduces a prompt-based attention model (PAM) that generates dynamic document embeddings and document relevance adjusted to the user's query. To effectively visualize these two pieces of information, we design a new document map that leverages a circular grid layout to display documents based on both their relevance to the query and their semantic similarity. Additionally, to improve interpretability, we introduce a corpus-level attention visualization method to improve the user's understanding of the model focus and to enable users to identify potential oversights. This visualization, in turn, empowers users to refine, update and introduce new queries, thereby facilitating a dynamic and iterative information-seeking experience. We evaluated VADIS quantitatively and qualitatively on a real-world dataset of biomedical research papers to demonstrate its effectiveness.","accessible_pdf":false,"authors":[{"affiliations":["Ohio State University, Columbus, United States"],"email":"qiu.580@buckeyemail.osu.edu","is_corresponding":true,"name":"Rui Qiu"},{"affiliations":["The Ohio State University, Columbus, United States"],"email":"tu.253@osu.edu","is_corresponding":false,"name":"Yamei Tu"},{"affiliations":["Washington University School of Medicine in St. Louis, St. Louis, United States"],"email":"yenp@wustl.edu","is_corresponding":false,"name":"Po-Yin Yen"},{"affiliations":["The Ohio State University, Columbus, United States"],"email":"hwshen@cse.ohio-state.edu","is_corresponding":false,"name":"Han-Wei Shen"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Rui Qiu"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1802","time_end":"","time_stamp":"","time_start":"","title":"VADIS: A Visual Analytics Pipeline for Dynamic Document Representation and Information Seeking","uid":"v-full-1802","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1803":{"abstract":"Scalar field comparison is a fundamental task in scientific visualization. In topological data analysis, we compare topological descriptors of scalar fields---such as persistence diagrams and merge trees---as they provide succinct and robust abstract representations. While several similarity measures for topological descriptors seem to be both asymptotically and practically efficient with polynomial time algorithms, they do not scale well when handling large-scale, time-varying scientific data and ensembles. In this paper, we propose a new framework to facilitate the comparative analysis of merge trees, inspired by tools from locality sensitive hashing (LSH). LSH hashes similar objects into the same hash buckets with high probability. We propose two new similarity measures for merge trees that can be computed via LSH, using new extensions to Recursive MinHash and subpath signature, respectively. Our similarity measures are extremely efficient to compute and closely resemble the results of existing measures such as merge tree edit distance or geometric interleaving distance. 
Our experiments demonstrate the utility of our LSH framework in applications such as shape matching, clustering, key event detection, and ensemble summarization.","accessible_pdf":false,"authors":[{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"lyuweiran@gmail.com","is_corresponding":false,"name":"Weiran Lyu"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"g.s.raghavendra@gmail.com","is_corresponding":true,"name":"Raghavendra Sridharamurthy"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"jeffp@cs.utah.edu","is_corresponding":false,"name":"Jeff M. Phillips"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"wang.bei@gmail.com","is_corresponding":false,"name":"Bei Wang"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Raghavendra Sridharamurthy"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1803","time_end":"","time_stamp":"","time_start":"","title":"Fast Comparative Analysis of Merge Trees Using Locality-Sensitive Hashing","uid":"v-full-1803","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1805":{"abstract":"The optimization of cooling systems is important in many cases, for example for cabin and battery cooling in electric cars. Such an optimization is governed by multiple, conflicting objectives and it is performed across a multi-dimensional parameter space. The extent of the parameter space, the complexity of the non-linear model of the system, as well as the time needed per simulation run and factors that are not modeled in the simulation necessitate an iterative, semi-automatic approach. We present an interactive visual optimization approach, where the user works with a p-h diagram to steer an iterative, guided optimization process. A deep learning (DL) model provides estimates for parameters, given a target characterization of the system, while numerical simulation is used to predict system characteristics for an ensemble of parameter sets. Since the DL model only serves as an approximation of the inverse of the cooling system and since target characteristics can be chosen according to different, competing objectives, an iterative optimization process is realized, developing multiple sets of intermediate solutions, which are visually related to each other. The standard p-h diagram, integrated interactively in this approach, is complemented by a dual, also interactive visual representation of additional expressive measures representing the system characteristics. We show how the known four-point semantics of the p-h diagram meaningfully transfers to the dual data representation. 
When evaluating this approach with our partners in the automotive domain, we found that our solution helped with the overall comprehension of the cooling system and that it led to faster convergence during optimization.","accessible_pdf":false,"authors":[{"affiliations":["VRVis Research Center, Vienna, Austria"],"email":"splechtna@vrvis.at","is_corresponding":false,"name":"Rainer Splechtna"},{"affiliations":["Virginia Tech, Blacksburg, United States"],"email":"behravan@vt.edu","is_corresponding":false,"name":"Majid Behravan"},{"affiliations":["AVL AST doo, Zagreb, Croatia"],"email":"mario.jelovic@avl.com","is_corresponding":false,"name":"Mario Jelovic"},{"affiliations":["Virginia Tech, Blacksburg, United States"],"email":"gracanin@vt.edu","is_corresponding":false,"name":"Denis Gracanin"},{"affiliations":["University of Bergen, Bergen, Norway"],"email":"helwig.hauser@uib.no","is_corresponding":false,"name":"Helwig Hauser"},{"affiliations":["VRVis Research Center, Vienna, Austria"],"email":"matkovic@vrvis.at","is_corresponding":true,"name":"Kresimir Matkovic"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Kresimir Matkovic"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1805","time_end":"","time_stamp":"","time_start":"","title":"Interactive Design-of-Experiments: Optimizing a Cooling System","uid":"v-full-1805","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1809":{"abstract":"Visualizing relational data is crucial for understanding complex connections between entities in social networks, political affiliations, or biological interactions. Well-known representations like node-link diagrams and adjacency matrices offer valuable insights, but their effectiveness relies on the ability to identify patterns in the underlying topological structure. Reordering strategies and layout algorithms play a vital role in the visualization process since the arrangement of nodes, edges, or cells influences the visibility of these patterns. The BioFabric visualization combines elements of node-link diagrams and adjacency matrices, leveraging the strengths of both: the visual clarity of node-link diagrams and the tabular organization of adjacency matrices. A unique characteristic of BioFabric is the possibility to reorder nodes and edges separately. This raises the question of which combination of layout algorithms best reveals certain patterns. In this paper, we discuss patterns and anti-patterns in BioFabric, such as staircases or escalators, relate them to already established patterns, and propose metrics to evaluate their quality. Based on these quality metrics, we compared combinations of well-established reordering techniques applied to BioFabric with a well-known benchmark data set. Our experiments indicate that the edge order has a stronger influence on revealing patterns than the node layout. The results show that the best combination for revealing staircases is a barycentric node layout, together with an edge order based on node indices and length. Our research contributes a first building block for many promising future research directions, which we also share and discuss. 
A free copy of this paper and all supplemental materials are available at OSF.","accessible_pdf":false,"authors":[{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"fuchs@dbvis.inf.uni-konstanz.de","is_corresponding":true,"name":"Johannes Fuchs"},{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"alexander.frings@uni-konstanz.de","is_corresponding":false,"name":"Alexander Frings"},{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"maria-viktoria.heinle@uni-konstanz.de","is_corresponding":false,"name":"Maria-Viktoria Heinle"},{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"keim@uni-konstanz.de","is_corresponding":false,"name":"Daniel Keim"},{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"sara.di-bartolomeo@uni-konstanz.de","is_corresponding":false,"name":"Sara Di Bartolomeo"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Johannes Fuchs"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1809","time_end":"","time_stamp":"","time_start":"","title":"Quality Metrics and Reordering Strategies for Revealing Patterns in BioFabric Visualizations","uid":"v-full-1809","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1810":{"abstract":"Classical bibliography, by scrutinizing preserved catalogs from both official archives and personal collections of accumulated books, examines the books throughout history, thereby elucidating cultural development across historical periods. In this work, we collaborate with domain experts to accomplish the task of data annotation concerning ancient Chinese catalogs. We introduce the CataAnno system that helps users complete annotations more efficiently through cross-linked views, recommendation methods and convenient annotation interactions. The recommendation method can learn the background knowledge and annotation patterns that experts subconsciously integrate into the data during prior annotation processes. CataAnno searches for the most relevant previously annotated examples and recommends them to the user. Meanwhile, the cross-linked views assist users in comprehending the correlations between entries and offer explanations for these recommendations. Evaluation and expert feedback confirm that the CataAnno system, by offering high-quality recommendations and visualizing the relationships between entries, can mitigate the necessity for specialized knowledge during the annotation process. 
This results in enhanced accuracy and consistency in annotations, thereby improving overall efficiency.","accessible_pdf":false,"authors":[{"affiliations":["Peking University, Beijing, China"],"email":"hanning.shao@pku.edu.cn","is_corresponding":true,"name":"Hanning Shao"},{"affiliations":["Peking University, Beijing, China"],"email":"xiaoru.yuan@pku.edu.cn","is_corresponding":false,"name":"Xiaoru Yuan"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Hanning Shao"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1810","time_end":"","time_stamp":"","time_start":"","title":"CataAnno: An Ancient Catalog Annotator for Annotation Cleaning by Recommendation","uid":"v-full-1810","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1830":{"abstract":"Over the past decade, several urban visual analytics systems have been proposed to tackle a host of challenges faced by cities, in areas as diverse as transportation, weather, and real estate. Many of these systems have been designed through engagement with urban experts, aiming to distill intricate urban analysis workflows into interactive visualizations and interfaces. The design, implementation, and practical use of these systems, however, still rely on siloed approaches that lead to bespoke tools that are hard to reproduce and extend. At the design level, these systems undervalue rich data workflows from urban experts by usually only treating them as data providers and evaluators. At the implementation level, these systems lack interoperability with other technical frameworks. At the practical use level, these systems tend to be narrowly focused on specific fields, inadvertently creating barriers for cross-domain collaboration. To tackle these gaps, we present Curio, a framework for collaborative urban visual analytics. Curio uses a dataflow model with multiple abstraction levels (code, grammar, GUI elements) to facilitate collaboration across the design and implementation of visual analytics components. The framework allows experts to intertwine preprocessing, managing, and visualization stages while tracking provenance of code and visualizations. In collaboration with urban experts, we evaluate Curio through a diverse series of use cases targeting urban accessibility, urban microclimate, and sunlight access. 
These cases use different types of urban data and domain methodologies to illustrate Curio's flexibility in tackling pressing societal challenges.","accessible_pdf":false,"authors":[{"affiliations":["University of Illinois at Chicago, Chicago, United States"],"email":"gmorei3@uic.edu","is_corresponding":false,"name":"Gustavo Moreira"},{"affiliations":["Massachusetts Institute of Technology , Somerville, United States"],"email":"maryamh@mit.edu","is_corresponding":false,"name":"Maryam Hosseini"},{"affiliations":["University of Illinois Urbana-Champaign, Urbana-Champaign, United States"],"email":"carolinavfs@id.uff.br","is_corresponding":false,"name":"Carolina Veiga Ferreira de Souza"},{"affiliations":["Universidade Federal Fluminense, Niteroi, Brazil"],"email":"lucasalexandre.s.cc@gmail.com","is_corresponding":false,"name":"Lucas Alexandre"},{"affiliations":["Politecnico di Milano, Milano, Italy"],"email":"nicola.colaninno@polimi.it","is_corresponding":false,"name":"Nicola Colaninno"},{"affiliations":["Universidade Federal Fluminense, Niter\u00f3i, Brazil"],"email":"danielcmo@ic.uff.br","is_corresponding":false,"name":"Daniel de Oliveira"},{"affiliations":["Universidade Federal de Pernambuco, Recife, Brazil"],"email":"nivan@cin.ufpe.br","is_corresponding":false,"name":"Nivan Ferreira"},{"affiliations":["Universidade Federal Fluminense , Niteroi, Brazil"],"email":"mlage@ic.uff.br","is_corresponding":false,"name":"Marcos Lage"},{"affiliations":["University of Illinois Chicago, Chicago, United States"],"email":"fabiom@uic.edu","is_corresponding":true,"name":"Fabio Miranda"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Fabio Miranda"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1830","time_end":"","time_stamp":"","time_start":"","title":"Curio: A Dataflow-Based Framework for Collaborative Urban Visual Analytics","uid":"v-full-1830","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1831":{"abstract":"When using exploratory visual analysis to examine multivariate hierarchical data, users often need to query data to narrow down the scope of analysis. However, formulating effective query expressions remains a challenge for multivariate hierarchical data, particularly when datasets become very large. To address this issue, we develop a declarative grammar, HiRegEx (Hierarchical data Regular Expression), for querying and exploring multivariate hierarchical data. Rooted in the extended multi-level task topology framework for tree visualizations (e-MLTT), HiRegEx delineates three query targets (node, path, and subtree) and two aspects for querying these targets (features and positions), and uses operators developed based on classical regular expressions for query construction. We develop a prototype system, TreeQueryER, to integrate an exploratory framework for querying and exploring multivariate hierarchical data based on HiRegEx. The exploratory framework includes three major components: top-down pattern specification, bottom-up data-driven inquiry, and context-creation data overview. 
We validate the expressiveness of HiRegEx with the tasks from the e-MLTT framework and showcase its utility and effectiveness through a usage scenario involving expert users in the analysis of a citation tree dataset.","accessible_pdf":false,"authors":[{"affiliations":["Beijing Institute of Technology, Beijing, China"],"email":"guozhg.li@gmail.com","is_corresponding":true,"name":"Guozheng Li"},{"affiliations":["Beijing Institute of Technology, Beijing, China"],"email":"haotian.mi1@gmail.com","is_corresponding":false,"name":"Haotian Mi"},{"affiliations":["Beijing Institute of Technology, Beijing, China"],"email":"liuchi02@gmail.com","is_corresponding":false,"name":"Chi Harold Liu"},{"affiliations":["Ochanomizu University, Tokyo, Japan"],"email":"itot@is.ocha.ac.jp","is_corresponding":false,"name":"Takayuki Itoh"},{"affiliations":["Beijing Institute of Technology, Beijing, China"],"email":"wanggrbit@126.com","is_corresponding":false,"name":"Guoren Wang"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Guozheng Li"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1831","time_end":"","time_stamp":"","time_start":"","title":"HiRegEx: Interactive Visual Query and Exploration of Multivariate Hierarchical Data","uid":"v-full-1831","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1833":{"abstract":"The concept of an intelligent augmented reality (AR) assistant has applications as significant as they are wide-ranging, with potential uses in medicine, military endeavors, and mechanics. Such an assistant must be able to perceive the performer\u2019s environment and actions, reason about the state of the environment in relation to a given task, and seamlessly interact with the performer. These interactions typically involve an AR headset equipped with a variety of sensors which capture video, audio, and haptic feedback. Previous works have sought to facilitate the development of such an assistant by visualizing these sensor data streams as well as the machine learning model outputs that support an assistant\u2019s perception and reasoning capabilities. However, existing visual analytics systems do not include biometric data or focus on user modeling, and are only capable of visualizing a single task session for a single performer at a time. Furthermore, they mainly focus on traditional task analysis that typically assumes a linear progression from one step to the next. We propose a visual analytics system that allows users to compare performance during multiple task sessions focusing on non-linear tasks where different paths or sequences can lead to the successful completion of the task. In particular, we design visualizations for understanding user behavior through functional near-infrared spectroscopy (fNIRS) data as a proxy for perception, attention, and memory as well as corresponding motion data (acceleration, angular velocity, and eye gaze). We distill these insights into visual embeddings that allow users to easily select groups of sessions with similar behaviors. We provide case studies that explore how insights into task performance can be gleaned from these visualizations using data collected during helicopter copilot training tasks. 
Finally, we evaluate our approach by conducting an in-depth examination of a think-aloud experiment with five domain experts.","accessible_pdf":false,"authors":[{"affiliations":["New York University, New York, United States"],"email":"s.castelo@nyu.edu","is_corresponding":true,"name":"Sonia Castelo Quispe"},{"affiliations":["New York University, New York, United States"],"email":"jlrulff@gmail.com","is_corresponding":false,"name":"Jo\u00e3o Rulff"},{"affiliations":["New York University, Brooklyn, United States"],"email":"pss442@nyu.edu","is_corresponding":false,"name":"Parikshit Solunke"},{"affiliations":["New York University, New York, United States"],"email":"erin.mcgowan@nyu.edu","is_corresponding":false,"name":"Erin McGowan"},{"affiliations":["New York University, New York City, United States"],"email":"guandewu@nyu.edu","is_corresponding":false,"name":"Guande Wu"},{"affiliations":["New York University, Brooklyn, United States"],"email":"iran@ccrma.stanford.edu","is_corresponding":false,"name":"Iran Roman"},{"affiliations":["New York University, New York, United States"],"email":"rlopez@nyu.edu","is_corresponding":false,"name":"Roque Lopez"},{"affiliations":["New York University, Brooklyn, United States"],"email":"bs3639@nyu.edu","is_corresponding":false,"name":"Bea Steers"},{"affiliations":["New York University, New York, United States"],"email":"qisun@nyu.edu","is_corresponding":false,"name":"Qi Sun"},{"affiliations":["New York University, New York, United States"],"email":"jpbello@nyu.edu","is_corresponding":false,"name":"Juan Pablo Bello"},{"affiliations":["Northrop Grumman Mission Systems, Redondo Beach, United States"],"email":"bradley.feest@ngc.com","is_corresponding":false,"name":"Bradley S Feest"},{"affiliations":["Northrop Grumman, Aurora, United States"],"email":"michael.middleton@ngc.com","is_corresponding":false,"name":"Michael Middleton"},{"affiliations":["Northrop Grumman, Falls Church, United States"],"email":"ryan.mckendrick@ngc.com","is_corresponding":false,"name":"Ryan McKendrick"},{"affiliations":["New York University, New York City, United States"],"email":"csilva@nyu.edu","is_corresponding":false,"name":"Claudio Silva"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Sonia Castelo Quispe"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1833","time_end":"","time_stamp":"","time_start":"","title":"HuBar: A Visual Analytics Tool to Explore Human Behaviour based on fNIRS in AR guidance systems","uid":"v-full-1833","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1836":{"abstract":"Shape is commonly used to distinguish between categories in multi-class scatterplots. However, existing guidelines for choosing effective shape palettes rely largely on intuition and do not consider how these needs may change as the number of categories increases. Although shapes, unlike colors, are finite in number, they cannot be represented in a numerical space, making it difficult to propose general guidelines for shape choices or to shed light on the design heuristics of designer-crafted shape palettes. 
This paper presents a series of four experiments evaluating the efficiency of 39 shapes across three tasks -- relative mean judgment tasks, expert choices, and data correlation estimation. Given how complex and tangled the results are, rather than relying on conventional features for modeling, we built a model and introduced a corresponding design tool that offers recommendations for shape encodings. The perceptual effectiveness of shapes significantly varies across specific pairs, and certain shapes may enhance perceptual efficiency and accuracy. However, how performance varies does not map well to classical features of shape such as angles, fill, or convex hull. We developed a model based on pairwise relations between shapes measured in our experiments and the number of shapes required to intelligently recommend shape palettes for a given design. This tool provides designers with agency over shape selection while incorporating empirical elements of perceptual performance captured in our study. Our model advances the understanding of shape perception in visualization contexts and provides practical design guidelines for advanced shape usage in visualization design that optimize perceptual efficiency.","accessible_pdf":false,"authors":[{"affiliations":["University of North Carolina-Chapel Hill, Chapel Hill, United States"],"email":"chint@cs.unc.edu","is_corresponding":true,"name":"Chin Tseng"},{"affiliations":["University of North Carolina-Chapel Hill, Chapel Hill, United States"],"email":"zeyuwang@cs.unc.edu","is_corresponding":false,"name":"Arran Zeyu Wang"},{"affiliations":["University of Oklahoma, Norman, United States"],"email":"quadri@ou.edu","is_corresponding":false,"name":"Ghulam Jilani Quadri"},{"affiliations":["University of North Carolina-Chapel Hill, Chapel Hill, United States"],"email":"danielle.szafir@cs.unc.edu","is_corresponding":false,"name":"Danielle Albers Szafir"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Chin Tseng"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1836","time_end":"","time_stamp":"","time_start":"","title":"An Empirically Grounded Approach for Designing Shape Palettes","uid":"v-full-1836","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1865":{"abstract":"In medical diagnostics of both early disease detection and routine patient care, particle-based contamination of in-vitro diagnostics (IVD) consumables poses a significant threat to patients. Objective data-driven decision making on the severity of contamination is key for reducing risk to patients, while saving time and cost in the quality assessment process. Our collaborators introduced us to their quality control process, including particle data acquisition through image recognition, feature extraction, and attributes reflecting the production context of particles. Shortcomings of the current process are analysis problems, like weak support in exploring thousands of particle images, associated attributes, and ineffective knowledge externalization for sense-making. 
Following the design study methodology, our contributions are a characterization of the problem space and requirements, the development and validation of DaedalusData, a comprehensive discussion of our study\u2019s learnings, and a generalizable approach for knowledge externalization. DaedalusData is a visual analytics system that empowers domain experts to explore particle contamination patterns, to label particles in label alphabets, and to externalize knowledge through semi-supervised label-informed data projections. The results of our case study show that DaedalusData supports experts in generating meaningful, comprehensive data overviews. Additionally, our user study shows that DaedalusData offers high usability, efficiently supports the labeling of large quantities of particles, and utilizes externalized knowledge to augment the dataset. Reflecting on our approach, we discuss insights on dataset augmentation via human knowledge externalization, and on the scalability and trade-offs that come with the adoption of this approach in practice.","accessible_pdf":false,"authors":[{"affiliations":["University of Z\u00fcrich, Z\u00fcrich, Switzerland","Roche pRED, Basel, Switzerland"],"email":"alexander.wyss@protonmail.com","is_corresponding":true,"name":"Alexander Wyss"},{"affiliations":["University of Zurich, Zurich, Switzerland"],"email":"gab.morgenshtern@gmail.com","is_corresponding":false,"name":"Gabriela Morgenshtern"},{"affiliations":["Roche Diagnostics International, Rotkreuz, Switzerland"],"email":"a.hirschhuesler@gmail.com","is_corresponding":false,"name":"Amanda Hirsch-H\u00fcsler"},{"affiliations":["University of Zurich, Zurich, Switzerland"],"email":"bernard@ifi.uzh.ch","is_corresponding":false,"name":"J\u00fcrgen Bernard"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Alexander Wyss"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1865","time_end":"","time_stamp":"","time_start":"","title":"DaedalusData: Exploration, Knowledge Externalization and Labeling of Particles in Medical Manufacturing - A Design Study","uid":"v-full-1865","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1866":{"abstract":"Feature grid Scene Representation Networks (SRNs) have been applied to scientific data as compact functional surrogates for analysis and visualization. As SRNs are black-box lossy data representations, assessing the prediction quality is critical for scientific visualization applications to ensure that scientists can trust the information being visualized. Currently, existing architectures do not support inference time reconstruction quality assessment, as voxel-wise errors cannot be evaluated in the absence of ground truth data. By employing uncertain neural network architectures in feature grid SRNs, we obtain prediction variances during inference time to facilitate confidence-aware data reconstruction. Specifically, we propose a parameter-efficient multi-decoder Ensemble SRN (E-SRN) architecture consisting of a shared feature grid with multiple lightweight multi-layer perceptron decoders. E-SRN can generate a set of plausible predictions for a given input coordinate to compute the mean as the ensemble prediction and the variance as a confidence score. 
The voxel-wise variance can be rendered along with the data to inform the reconstruction quality, or be integrated into uncertainty-aware volume visualization algorithms. To prevent the misalignment between the quantified variance and the prediction quality, we propose a novel variance regularization loss for ensemble learning that promotes the Regularized Ensemble SRN (RE-SRN) to obtain a more reliable variance that correlates closely to the true model error. We comprehensively evaluate the quality of variance quantification and data reconstruction of Monte Carlo Dropout (MCD), Mean Field Variational Inference (MFVI), Deep Ensemble (DE), and Predicting Variance (PV) in comparison with our proposed E-SRN and RE-SRN applied to state-of-the-art feature grid SRNs across diverse scalar field datasets. We demonstrate that RE-SRN attains the most accurate data reconstruction and competitive variance-error correlation among uncertain SRNs under the same neural network parameter budgets. Furthermore, we present an adaptation of uncertainty-aware volume rendering and shed light on the potential of incorporating uncertain predictions in improving the quality of volume rendering for uncertain SRNs. Through ablation studies on the regularization strength and ensemble size, we show that E-SRN and RE-SRN are expected to perform sufficiently well with a default configuration without requiring customized hyperparameter settings for different datasets.","accessible_pdf":false,"authors":[{"affiliations":["The Ohio State University, Columbus, United States"],"email":"xiong.336@osu.edu","is_corresponding":true,"name":"Tianyu Xiong"},{"affiliations":["The Ohio State University, Columbus, United States"],"email":"wurster.18@osu.edu","is_corresponding":false,"name":"Skylar Wolfgang Wurster"},{"affiliations":["The Ohio State University, Columbus, United States","Argonne National Laboratory, Lemont, United States"],"email":"guo.2154@osu.edu","is_corresponding":false,"name":"Hanqi Guo"},{"affiliations":["Argonne National Laboratory, Lemont, United States"],"email":"tpeterka@mcs.anl.gov","is_corresponding":false,"name":"Tom Peterka"},{"affiliations":["The Ohio State University , Columbus , United States"],"email":"hwshen@cse.ohio-state.edu","is_corresponding":false,"name":"Han-Wei Shen"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Tianyu Xiong"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1866","time_end":"","time_stamp":"","time_start":"","title":"Regularized Multi-Decoder Ensemble for an Error-Aware Scene Representation Network","uid":"v-full-1866","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1874":{"abstract":"A layered network is an important category of graph in which every node is assigned to a layer and layers are drawn as parallel or radial lines. They are commonly used to display temporal data or hierarchical networks. Previous research has demonstrated that minimizing edge crossings is the most important criterion to consider when looking to improve the readability of such networks. While heuristic approaches exist for crossing minimization, we are interested in optimal approaches to the problem that prioritize human readability over computational scalability. 
We aim to improve the usefulness and applicability of such optimal methods by understanding and improving their scalability to larger graphs. This paper categorizes and evaluates the state-of-the-art linear programming formulations for exact crossing minimization and describes nine new and existing techniques that could plausibly accelerate the optimization algorithm. Through a computational evaluation, we explore each technique's effect on calculation time and how the techniques assist or inhibit one another, allowing researchers and practitioners to adapt them to the characteristics of their networks. Our best-performing techniques yielded a median improvement of 2.5--17x depending on the solver used, giving us the capability to create optimal layouts faster and for larger networks. We provide an open-source implementation of our methodology in Python, where users can pick which combination of techniques to enable according to their use case. A free copy of this paper and all supplemental materials, datasets used, and source code are available at {https://osf.io/}.","accessible_pdf":false,"authors":[{"affiliations":["Northeastern University, Boston, United States"],"email":"wilson.conn@northeastern.edu","is_corresponding":true,"name":"Connor Wilson"},{"affiliations":["Northeastern University, Boston, United States"],"email":"eduardopuertac@gmail.com","is_corresponding":false,"name":"Eduardo Puerta"},{"affiliations":["northeastern university, Boston, United States"],"email":"turokhunter@gmail.com","is_corresponding":false,"name":"Tarik Crnovrsanin"},{"affiliations":["University of Konstanz, Konstanz, Germany","Northeastern University, Boston, United States"],"email":"sara.di-bartolomeo@uni-konstanz.de","is_corresponding":false,"name":"Sara Di Bartolomeo"},{"affiliations":["Northeastern University, Boston, United States"],"email":"c.dunne@northeastern.edu","is_corresponding":false,"name":"Cody Dunne"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Connor Wilson"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1874","time_end":"","time_stamp":"","time_start":"","title":"Evaluating and extending speedup techniques for optimal crossing minimization in layered graph drawings","uid":"v-full-1874","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1880":{"abstract":"Merge trees are a valuable tool in scientific visualization of scalar fields; however, current methods for merge tree comparisons are computationally expensive, primarily due to the exhaustive matching between tree nodes. To address this challenge, we introduce the merge tree neural networks (MTNN), a learned neural network model designed for merge tree comparison. The MTNN enables rapid and high-quality similarity computation. We first demonstrate how graph neural networks (GNNs), which emerged as an effective encoder for graphs, can be trained to produce embeddings of merge trees in vector spaces that enable efficient similarity comparison. Next, we formulate the novel MTNN model that further improves the similarity comparisons by integrating the tree and node embeddings with a new topological attention mechanism. 
We demonstrate the effectiveness of our model on real-world data in different domains and examine our model's generalizability across various datasets. Our experimental analysis demonstrates our approach's superiority in accuracy and efficiency. In particular, we speed up the prior state-of-the-art by more than 100x on the benchmark datasets while maintaining an error rate below 0.1%.","accessible_pdf":false,"authors":[{"affiliations":["Tulane University, New Orleans, United States"],"email":"yqin2@tulane.edu","is_corresponding":true,"name":"Yu Qin"},{"affiliations":["Montana State University, Bozeman, United States"],"email":"brittany.fasy@montana.edu","is_corresponding":false,"name":"Brittany Terese Fasy"},{"affiliations":["Tulane University, New Orleans, United States"],"email":"cwenk@tulane.edu","is_corresponding":false,"name":"Carola Wenk"},{"affiliations":["Tulane University, New Orleans, United States"],"email":"bsumma@tulane.edu","is_corresponding":false,"name":"Brian Summa"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yu Qin"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1880","time_end":"","time_stamp":"","time_start":"","title":"Rapid and Precise Topological Comparison with Merge Tree Neural Networks","uid":"v-full-1880","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1917":{"abstract":"The importance of data charts is self-evident, given their ability to express complex data in a simple format that facilitates quick and easy comparisons, analysis, and consumption. However, the inherent visual nature of the charts creates barriers for people with visual impairments to reap the associated benefits to the same extent as their sighted peers. While extant research has predominantly focused on understanding and addressing these barriers for blind screen reader users, the needs of low-vision screen magnifier users have been largely overlooked. In an interview study, almost all low-vision participants stated that it was challenging to interact with data charts on small screen devices such as smartphones and tablets, even though they could technically \u201csee\u201d the chart content. They ascribed these challenges mainly to the magnification-induced loss of visual context that connected data points with each other and also with chart annotations, e.g., axis values. In this paper, we present a method that addresses this problem by automatically transforming charts that are typically non-interactive images into personalizable interactive charts which allow selective viewing of desired data points and preserve visual context as much as possible under screen enlargement. We evaluated our method in a usability study with 26 low-vision participants, who all performed a set of representative chart-related tasks under different study conditions. 
In the study, we observed that our method significantly improved the usability of charts over both the status quo screen magnifier and a state-of-the-art space compaction-based solution.","accessible_pdf":false,"authors":[{"affiliations":["Old Dominion University, Norfolk, United States"],"email":"yprak001@odu.edu","is_corresponding":true,"name":"Yash Prakash"},{"affiliations":["Old Dominion University, Norfolk, United States"],"email":"pkhan002@odu.edu","is_corresponding":false,"name":"Pathan Aseef Khan"},{"affiliations":["Old Dominion University, Norfolk, United States"],"email":"anaya001@odu.edu","is_corresponding":false,"name":"Akshay Kolgar Nayak"},{"affiliations":["Old Dominion University, Norfolk, United States"],"email":"uksjayarathna@gmail.com","is_corresponding":false,"name":"Sampath Jayarathna"},{"affiliations":["Michigan State University, East Lansing, United States"],"email":"leehaena@msu.edu","is_corresponding":false,"name":"Hae-Na Lee"},{"affiliations":["Old Dominion University, Norfolk, United States"],"email":"vganjigu@odu.edu","is_corresponding":false,"name":"Vikas Ashok"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yash Prakash"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1917","time_end":"","time_stamp":"","time_start":"","title":"Towards Enhancing Low Vision Usability of Data Charts on Smartphones","uid":"v-full-1917","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1040":{"abstract":"From dirty data to intentional deception, there are many threats to the validity of data-driven decisions. Making use of data, especially new or unfamiliar data, therefore requires a degree of trust or verification. How is this trust established? In this paper, we present the results of a series of interviews with both producers and consumers of data artifacts (outputs of data ecosystems like spreadsheets, charts, and dashboards) aimed at understanding strategies and obstacles to building trust in data. We find a recurring need, but a lack of existing standards, for data validation and verification, especially among data consumers. 
We therefore propose a set of data guards: methods and tools for fostering trust in data artifacts.","accessible_pdf":false,"authors":[{"affiliations":["Tableau Research, Seattle, United States"],"email":"nicole.sultanum@gmail.com","is_corresponding":true,"name":"Nicole Sultanum"},{"affiliations":["Tableau Research, Seattle, United States"],"email":"bromley.denny@gmail.com","is_corresponding":false,"name":"Dennis Bromley"},{"affiliations":["Northeastern University, Portland, United States"],"email":"m.correll@northeastern.edu","is_corresponding":false,"name":"Michael Correll"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Nicole Sultanum"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1040","time_end":"","time_stamp":"","time_start":"","title":"Data Guards: Challenges and Solutions for Fostering Trust in Data","uid":"v-short-1040","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1047":{"abstract":"In the rapidly evolving field of deep learning, the traditional methodologies for designing deep learning models predominantly rely on code-based frameworks. While these approaches provide flexibility, they also create a significant barrier to entry for non-experts and obscure the immediate impact of architectural decisions on model performance. In response to this challenge, recent no-code approaches have been developed with the aim of enabling easy model development through graphical interfaces. However, both traditional and no-code methodologies share a common limitation: the inability to predict model outcomes or identify issues without executing the model. To address this limitation, we introduce an intuitive visual feedback-based no-code approach to visualize and analyze deep learning models during the design phase. This approach utilizes dataflow-based visual programming with dynamic visual encoding of model architecture. A user study was conducted with deep learning developers to demonstrate the effectiveness of our approach in enhancing the model design process, improving model understanding, and facilitating a more intuitive development experience. 
The findings of this study suggest that real-time architectural visualization significantly contributes to more efficient model development and a deeper understanding of model behaviors.","accessible_pdf":false,"authors":[{"affiliations":["VIENCE Inc., Seoul, Korea, Republic of","Korea University, Seoul, Korea, Republic of"],"email":"juny0603@gmail.com","is_corresponding":true,"name":"JunYoung Choi"},{"affiliations":["VIENCE Inc., Seoul, Korea, Republic of"],"email":"wings159@vience.co.kr","is_corresponding":false,"name":"Sohee Park"},{"affiliations":["Korea University, Seoul, Korea, Republic of"],"email":"hellenkoh@gmail.com","is_corresponding":false,"name":"GaYeon Koh"},{"affiliations":["VIENCE Inc., Seoul, Korea, Republic of"],"email":"k0seo0330@vience.co.kr","is_corresponding":false,"name":"Youngseo Kim"},{"affiliations":["VIENCE Inc., Seoul, Korea, Republic of","Korea University, Seoul, Korea, Republic of"],"email":"wkjeong@korea.ac.kr","is_corresponding":false,"name":"Won-Ki Jeong"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["JunYoung Choi"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1047","time_end":"","time_stamp":"","time_start":"","title":"Intuitive Design of Deep Learning Models through Visual Feedback","uid":"v-short-1047","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1049":{"abstract":"This comparative study evaluates various neural surface reconstruction methods, particularly focusing on their implications for scientific visualization through reconstructing 3D surfaces via multi-view rendering images. We categorize ten methods into neural radiance fields and neural implicit surfaces, uncovering the benefits of leveraging distance functions (i.e., SDFs and UDFs) to enhance the accuracy and smoothness of the reconstructed surfaces. Our findings highlight the efficiency and quality of NeuS2 for reconstructing closed surfaces and identify NeUDF as a promising candidate for reconstructing open surfaces despite some limitations. We further pinpoint directions for future research, including improving detail capture, optimizing UDF computations, and refining surface extraction methods. 
By sharing our benchmark dataset, we invite researchers to test the performance of their methods, contributing to the advancement of surface reconstruction solutions for scientific visualization.","accessible_pdf":false,"authors":[{"affiliations":["University of Notre Dame, Notre Dame, United States"],"email":"syao2@nd.edu","is_corresponding":true,"name":"Siyuan Yao"},{"affiliations":["Wuhan University, Wuhan, China"],"email":"song.wx@whu.edu.cn","is_corresponding":false,"name":"Weixi Song"},{"affiliations":["University of Notre Dame, Notre Dame, United States"],"email":"chaoli.wang@nd.edu","is_corresponding":false,"name":"Chaoli Wang"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Siyuan Yao"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1049","time_end":"","time_stamp":"","time_start":"","title":"A Comparative Study of Neural Surface Reconstruction for Scientific Visualization","uid":"v-short-1049","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1054":{"abstract":"Direct volume rendering using ray-casting is widely used in practice. By using GPUs and applying acceleration techniques such as empty space skipping, high frame rates are possible on modern hardware. This enables performance-critical use-cases such as virtual reality volume rendering. The currently fastest known technique uses volumetric distance maps to skip empty sections of the volume during ray-casting but requires the distance map to be updated per transfer function change. In this paper, we demonstrate a technique for subdividing the volume intensity range into partitions and deriving what we call partitioned distance maps. These can be used to accelerate the distance map computation for a newly changed transfer function by a factor of up to 30. 
This allows the currently fastest known empty space skipping approach to be used while maintaining high frame rates even when the transfer function is changed frequently.","accessible_pdf":false,"authors":[{"affiliations":["University of Applied Sciences Wiener Neustadt, Wiener Neustadt, Austria"],"email":"michael.rauter@fhwn.ac.at","is_corresponding":true,"name":"Michael Rauter"},{"affiliations":["Medical University of Vienna, Vienna, Austria"],"email":"lukas.a.zimmermann@meduniwien.ac.at","is_corresponding":false,"name":"Lukas Zimmermann PhD"},{"affiliations":["University of Applied Sciences Wiener Neustadt, Wiener Neustadt, Austria"],"email":"markus.zeilinger@fhwn.ac.at","is_corresponding":false,"name":"Markus Zeilinger PhD"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Michael Rauter"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1054","time_end":"","time_stamp":"","time_start":"","title":"Accelerating Transfer Function Update for Distance Map based Volume Rendering","uid":"v-short-1054","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1056":{"abstract":"We present FCNR, a fast compressive neural representation for tens of thousands of visualization images under varying viewpoints and timesteps. The existing NeRVI solution, albeit enjoying a high compression rate, incurs slow speeds in encoding and decoding. Built on the recent advances in stereo image compression, FCNR assimilates stereo context modules and joint context transfer modules to compress image pairs. Our solution significantly improves encoding and decoding speed while maintaining high reconstruction quality and a satisfactory compression rate. To demonstrate its effectiveness, we compare FCNR with state-of-the-art neural compression methods, including E-NeRV, HNeRV, NeRVI, and ECSIC.","accessible_pdf":false,"authors":[{"affiliations":["University of Notre Dame, Notre Dame, United States"],"email":"ylu25@nd.edu","is_corresponding":true,"name":"Yunfei Lu"},{"affiliations":["University of Notre Dame, Notre Dame, United States"],"email":"pgu@nd.edu","is_corresponding":false,"name":"Pengfei Gu"},{"affiliations":["University of Notre Dame, Notre Dame, United States"],"email":"chaoli.wang@nd.edu","is_corresponding":false,"name":"Chaoli Wang"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yunfei Lu"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1056","time_end":"","time_stamp":"","time_start":"","title":"FCNR: Fast Compressive Neural Representation of Visualization Images","uid":"v-short-1056","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1057":{"abstract":"Real-world datasets often consist of quantitative and categorical variables. The analyst needs to focus on either kind separately or both jointly. We propose a visualization technique that tackles these challenges by supporting visual cluster and set analysis. 
In this paper, we investigate how its visualization parameters affect the accuracy and speed of cluster and set analysis tasks in a controlled experiment. Our findings show that, with the proper settings, our visualization can support both task types well. However, we did not find settings suitable for the joint task, which provides opportunities for future research.","accessible_pdf":false,"authors":[{"affiliations":["TU Wien, Vienna, Austria"],"email":"nikolaus.piccolotto@tuwien.ac.at","is_corresponding":true,"name":"Nikolaus Piccolotto"},{"affiliations":["TU Wien, Vienna, Austria"],"email":"mwallinger@ac.tuwien.ac.at","is_corresponding":false,"name":"Markus Wallinger"},{"affiliations":["Institute of Visual Computing and Human-Centered Technology, Vienna, Austria"],"email":"miksch@ifs.tuwien.ac.at","is_corresponding":false,"name":"Silvia Miksch"},{"affiliations":["TU Wien, Vienna, Austria"],"email":"markus.boegl@tuwien.ac.at","is_corresponding":false,"name":"Markus B\u00f6gl"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Nikolaus Piccolotto"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1057","time_end":"","time_stamp":"","time_start":"","title":"On Combined Visual Cluster and Set Analysis","uid":"v-short-1057","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1058":{"abstract":"Semantic interaction (SI) in Dimension Reduction (DR) of images allows users to incorporate feedback through direct manipulation of the 2D positions of images. Through interaction, users specify a set of pairwise relationships that the DR should aim to capture. Existing methods for images incorporate feedback into the DR through feature weights on abstract embedding features. However, if the original embedding features do not suitably capture the user's task, then the DR cannot either. We propose ImageSI, an SI method for image DR that incorporates user feedback directly into the image model to update the underlying embeddings, rather than weighting them. In doing so, ImageSI ensures that the embeddings suitably capture the features necessary for the task so that the DR can subsequently organize images using those features. We present two variations of ImageSI using different loss functions - ImageSI_MDS-Inverse, which prioritizes the explicit pairwise relationships from the interaction, and ImageSI_Triplet, which prioritizes clustering, using the interaction to define groups of images. 
Finally, we present a usage scenario and a simulation-based evaluation to demonstrate the utility of ImageSI and compare it to current methods.","accessible_pdf":false,"authors":[{"affiliations":["Virginia Tech, Blacksburg, United States"],"email":"jiayuelin@vt.edu","is_corresponding":false,"name":"Jiayue Lin"},{"affiliations":["Tulane University, New Orleans, United States"],"email":"rfaust1@tulane.edu","is_corresponding":true,"name":"Rebecca Faust"},{"affiliations":["Virginia Tech, Blacksburg, United States"],"email":"north@vt.edu","is_corresponding":false,"name":"Chris North"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Rebecca Faust"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1058","time_end":"","time_stamp":"","time_start":"","title":"ImageSI: Semantic Interaction for Deep Learning Image Projections","uid":"v-short-1058","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1059":{"abstract":"Gantt charts are a widely-used idiom for visualizing temporal discrete event sequence data where dependencies exist between events. They are popular in domains such as manufacturing and computing for their intuitive layout of such data. However, these domains frequently generate data at scales that tax both the visual representation and the ability to render it at interactive speeds. To aid visualization developers who use Gantt charts in these situations, we develop a task taxonomy of low-level visualization tasks supported by Gantt charts and connect them to the data queries needed to support them. Our taxonomy is derived through a systematic literature survey of visualizations using Gantt charts over the past 30 years.","accessible_pdf":false,"authors":[{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"sayefsakin@sci.utah.edu","is_corresponding":true,"name":"Sayef Azad Sakin"},{"affiliations":["The University of Utah, Salt Lake City, United States"],"email":"kisaacs@sci.utah.edu","is_corresponding":false,"name":"Katherine E. Isaacs"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Sayef Azad Sakin"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1059","time_end":"","time_stamp":"","time_start":"","title":"A Literature-based Visualization Task Taxonomy for Gantt charts","uid":"v-short-1059","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1062":{"abstract":"Annotations are a critical component of visualizations, helping viewers interpret the visual representation and highlighting critical data insights. Despite their significant role, we lack an understanding of how annotations can be incorporated into other data representations, such as physicalizations and sonifications. Given the emergent nature of these representations, sonifications and physicalizations lack formalized conventions (e.g., design space, vocabulary) that can introduce challenges for audiences to interpret the intended data encoding. 
To address this challenge, this work focuses on how annotations can be more tightly integrated into the design process of creating sonifications and physicalizations. In an exploratory study with 13 designers, we explore how visualization annotation techniques can be adapted to sonic and physical modalities. Our work highlights how annotations for sonifications and physicalizations are inseparable from their data encodings.","accessible_pdf":false,"authors":[{"affiliations":["Whitman College, Walla Walla, United States"],"email":"sorensor@whitman.edu","is_corresponding":false,"name":"Rhys Sorenson-Graff"},{"affiliations":["University of Colorado Boulder, Boulder, United States"],"email":"sandra.bae@colorado.edu","is_corresponding":true,"name":"S. Sandra Bae"},{"affiliations":["Whitman College, Walla Walla, United States"],"email":"wirfsbro@colorado.edu","is_corresponding":false,"name":"Jordan Wirfs-Brock"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["S. Sandra Bae"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1062","time_end":"","time_stamp":"","time_start":"","title":"Integrating Annotations into the Design Process for Sonifications and Physicalizations","uid":"v-short-1062","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1064":{"abstract":"Large Language Models (LLMs) have demonstrated remarkable versatility in visualization authoring, but often generate suboptimal designs that are invalid or fail to adhere to design guidelines for effective visualization. We present Bavisitter, a natural language interface that integrates established visualization design guidelines into LLMs. Based on our survey of design issues in LLM-generated visualizations, Bavisitter monitors the generated visualizations during a visualization authoring dialogue to detect an issue. When an issue is detected, it intervenes in the dialogue, suggesting possible solutions to the issue by modifying the prompts. 
We also demonstrate two use cases where Bavisitter detects and resolves design issues from actual LLM-generated visualizations.","accessible_pdf":false,"authors":[{"affiliations":["Sungkyunkwan University, Suwon, Korea, Republic of"],"email":"jiwnchoi@skku.edu","is_corresponding":true,"name":"Jiwon Choi"},{"affiliations":["Sungkyunkwan University, Suwon, Korea, Republic of"],"email":"dlwodnd00@skku.edu","is_corresponding":false,"name":"Jaeung Lee"},{"affiliations":["Sungkyunkwan University, Suwon, Korea, Republic of"],"email":"jmjo@skku.edu","is_corresponding":false,"name":"Jaemin Jo"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Jiwon Choi"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1064","time_end":"","time_stamp":"","time_start":"","title":"Bavisitter: Integrating Design Guidelines into Large Language Models for Visualization Authoring","uid":"v-short-1064","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1065":{"abstract":"Although many dimensionality reduction (DR) techniques employ stochastic methods for computational efficiency, such as negative sampling or stochastic gradient descent, their impact on the projection has been underexplored. In this work, we investigate how such stochasticity affects the stability of projections and present a novel DR technique, GhostUMAP, to measure the pointwise instability of projections. Our idea is to introduce clones of data points, \"ghosts\", into UMAP's layout optimization process. Ghosts are designed to be completely passive: they do not affect any others but are influenced by attractive and repulsive forces from the original data points. After a single optimization run, GhostUMAP can capture the projection instability of data points by measuring the variance of the projected positions of their ghosts. We also present a successive halving technique to reduce the computation of GhostUMAP. 
Our results suggest that GhostUMAP can reveal unstable data points with a reasonable computational overhead.","accessible_pdf":false,"authors":[{"affiliations":["Sungkyunkwan University, Suwon, Korea, Republic of"],"email":"mw.jung@skku.edu","is_corresponding":true,"name":"Myeongwon Jung"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"takanori.fujiwara@liu.se","is_corresponding":false,"name":"Takanori Fujiwara"},{"affiliations":["Sungkyunkwan University, Suwon, Korea, Republic of"],"email":"jmjo@skku.edu","is_corresponding":false,"name":"Jaemin Jo"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Myeongwon Jung"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1065","time_end":"","time_stamp":"","time_start":"","title":"GhostUMAP: Measuring Pointwise Instability in Dimensionality Reduction","uid":"v-short-1065","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1068":{"abstract":"Integrating textual content, such as titles, annotations, and captions, with visualizations facilitates comprehension and takeaways during data exploration. Yet current tools often lack mechanisms for integrating meaningful text with visual data. This paper introduces DASH, a bimodal data exploration tool that supports integrating semantic levels into the interactive process of visualization and text-based analysis. DASH operationalizes a modified version of Lundgard et al.'s semantic hierarchy model that categorizes data descriptions into four levels ranging from basic encodings to high-level insights. By leveraging this structured semantic level framework and a large language model's text generation capabilities, DASH enables the creation of data-driven narratives via drag-and-drop user interaction. Through a preliminary user evaluation, we discuss the utility of DASH's text and chart integration capabilities when participants perform data exploration with the tool. Based on the study's feedback and observations, we discuss implications for designing unified text and chart authoring tools.","accessible_pdf":false,"authors":[{"affiliations":["Tableau Research, Seattle, United States"],"email":"bromley.denny@gmail.com","is_corresponding":true,"name":"Dennis Bromley"},{"affiliations":["Tableau Research, Palo Alto, United States"],"email":"vsetlur@tableau.com","is_corresponding":false,"name":"Vidya Setlur"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Dennis Bromley"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1068","time_end":"","time_stamp":"","time_start":"","title":"DASH: A Bimodal Data Exploration Tool for Interactive Text and Visualizations","uid":"v-short-1068","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1072":{"abstract":"Recent advancements in vision models have significantly enhanced their ability to perform complex chart understanding tasks, such as chart captioning and chart question answering. 
However, assessing how these models process charts remains challenging. Existing benchmarks only coarsely evaluate how well the model performs the given task without thoroughly evaluating the underlying mechanisms that drive performance, such as how models extract image embeddings. This gap limits our understanding of the model's perceptual capabilities regarding fundamental graphical components. Therefore, we introduce a novel evaluation framework designed to assess the graphical perception of image embedding models. In the context of chart comprehension, we examine two main aspects of channel effectiveness: accuracy and discriminability of various visual channels. We first assess channel accuracy through the linearity of embeddings, which is the degree to which the perceived magnitude is proportional to the size of the stimulus. Conversely, distances between embeddings serve as a measure of discriminability; embeddings that are far apart can be considered discriminable. Our experiments on a general image embedding model, CLIP, showed that it perceives channel accuracy differently from humans and exhibits distinct discriminability in specific channels such as length, tilt, and curvature. We aim to extend our work into a more general benchmark for reliable visual encoders and to enhance models toward two distinct future applications: precise chart comprehension and mimicking human perception.","accessible_pdf":false,"authors":[{"affiliations":["Seoul National University, Seoul, Korea, Republic of"],"email":"dtngus0111@gmail.com","is_corresponding":true,"name":"Soohyun Lee"},{"affiliations":["Seoul National University, Seoul, Korea, Republic of"],"email":"jangsus1@snu.ac.kr","is_corresponding":false,"name":"Minsuk Chang"},{"affiliations":["Seoul National University, Seoul, Korea, Republic of"],"email":"shpark@hcil.snu.ac.kr","is_corresponding":false,"name":"Seokhyeon Park"},{"affiliations":["Seoul National University, Seoul, Korea, Republic of"],"email":"jseo@snu.ac.kr","is_corresponding":false,"name":"Jinwook Seo"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Soohyun Lee"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1072","time_end":"","time_stamp":"","time_start":"","title":"Assessing Graphical Perception of Image Embedding Models using Channel Effectiveness","uid":"v-short-1072","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1078":{"abstract":"Data visualizations are reaching global audiences. As people who use Right-to-left (RTL) scripts constitute over a billion potential data visualization users, a need emerges to investigate how visualizations are communicated to them. Web design guidelines exist to assist designers in adapting to different reading directions, yet we lack a similar standard for visualization design. This paper investigates the design patterns of visualizations with RTL scripts. We collected 128 visualizations from data-driven articles published in Arabic news outlets and analyzed their chart composition, textual elements, and sources. Our analysis suggests that designers tend to apply RTL approaches more frequently for categorical data. 
In other situations, we observed a mix of Left-to-right (LTR) and RTL approaches for chart directions and structures, sometimes inconsistently utilized within the same article. We reflect on this lack of clear guidelines for RTL data visualizations and derive implications for visualization authoring tools and future research directions.","accessible_pdf":false,"authors":[{"affiliations":["University College London, London, United Kingdom","UAE University , Al Ain, United Arab Emirates"],"email":"muna.alebri.19@ucl.ac.uk","is_corresponding":true,"name":"Muna Alebri"},{"affiliations":["Worcester Polytechnic Institute, Worcester, United States"],"email":"ntrakotondravony@wpi.edu","is_corresponding":false,"name":"No\u00eblle Rakotondravony"},{"affiliations":["Worcester Polytechnic Institute, Worcester, United States"],"email":"ltharrison@wpi.edu","is_corresponding":false,"name":"Lane Harrison"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Muna Alebri"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1078","time_end":"","time_stamp":"","time_start":"","title":"Design Patterns in Right-to-Left Visualizations: The Case of Arabic Content","uid":"v-short-1078","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1079":{"abstract":"Image datasets serve as the foundation for machine learning models in computer vision, significantly influencing model capabilities, performance, and biases alongside architectural considerations. Therefore, understanding the composition and distribution of these datasets has become increasingly crucial. To address the need for intuitive exploration of these datasets, we propose AEye, an extensible and scalable visualization tool tailored to image datasets. AEye utilizes a contrastively trained model to embed images into semantically meaningful high-dimensional representations, facilitating data clustering and organization. To visualize the high-dimensional representations, we project them onto a two-dimensional plane and arrange images in layers so users can seamlessly navigate and explore them interactively. Furthermore, AEye facilitates semantic search functionalities for both text and image queries, enabling users to search for content. 
We open-source the codebase for AEye, and provide a simple configuration to add additional datasets.","accessible_pdf":false,"authors":[{"affiliations":["ETH Zurich, Zurich, Switzerland"],"email":"fgroetschla@ethz.ch","is_corresponding":false,"name":"Florian Gr\u00f6tschla"},{"affiliations":["ETH Zurich, Zurich, Switzerland"],"email":"lanzendoerfer@ethz.ch","is_corresponding":false,"name":"Luca A Lanzend\u00f6rfer"},{"affiliations":["ETH Zurich, Zurich, Switzerland"],"email":"mcalzavara@student.ethz.ch","is_corresponding":false,"name":"Marco Calzavara"},{"affiliations":["ETH Zurich, Zurich, Switzerland"],"email":"wattenhofer@ethz.ch","is_corresponding":false,"name":"Roger Wattenhofer"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Florian Gr\u00f6tschla"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1079","time_end":"","time_stamp":"","time_start":"","title":"AEye: A Visualization Tool for Image Datasets","uid":"v-short-1079","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1081":{"abstract":"The sine illusion occurs when more quickly changing pairs of lines lead to larger underestimates of the delta between them. In a user study, we evaluate three visual manipulations for mitigating the sine illusion: dotted lines, aligned gridlines, and offset gridlines. We asked participants to compare the deltas between two lines at two time points and found aligned gridlines to be the most effective in mitigating sine illusions. Using data from the user study, we produced a model that predicts the impact of the sine illusion in line charts by accounting for the ratio of the vertical distance between the two points of comparison. When the ratio is less than 50%, participants begin to be influenced by the sine illusion. This effect can be significantly exacerbated when the difference between the two deltas falls under 30%. We compared two explanations for the sine illusion based on our data: either participants were mistakenly using the perpendicular distance between the two lines to make their comparison (the perpendicular explanation), or they incorrectly relied on the length of the line segment perpendicular to the angle bisector of the bottom and top lines (the equal triangle explanation). 
We found the equal triangle explanation to be the more predictive model of participant behaviors.","accessible_pdf":false,"authors":[{"affiliations":["Google LLC, San Francisco, United States"],"email":"cknit1999@gmail.com","is_corresponding":false,"name":"Clayton J Knittel"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"jawuah3@gatech.edu","is_corresponding":false,"name":"Jane Awuah"},{"affiliations":["Northwestern University, Evanston, United States"],"email":"franconeri@northwestern.edu","is_corresponding":false,"name":"Steven L Franconeri"},{"affiliations":["Georgia Tech, Atlanta, United States"],"email":"cxiong@gatech.edu","is_corresponding":true,"name":"Cindy Xiong Bearfield"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Cindy Xiong Bearfield"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1081","time_end":"","time_stamp":"","time_start":"","title":"Gridlines Mitigate Sine Illusion in Line Charts","uid":"v-short-1081","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1089":{"abstract":"In healthcare, AI techniques are widely used for tasks like risk assessment and anomaly detection. Despite AI's potential as a valuable assistant, common framings of its role in complex medical data analysis often oversimplify human-AI collaboration dynamics. To address this, we collaborated with a local hospital, engaging six physicians and one data scientist in a formative study. From this collaboration, we propose a framework integrating two-phase interactive visualization systems: one for Human-Led, AI-Assisted Retrospective Analysis and another for AI-Mediated, Human-Reviewed Iterative Modeling. 
This framework aims to enhance understanding and discussion around effective human-AI collaboration in healthcare.","accessible_pdf":false,"authors":[{"affiliations":["ShanghaiTech University, Shanghai, China"],"email":"ouyy@shanghaitech.edu.cn","is_corresponding":true,"name":"Yang Ouyang"},{"affiliations":["University of Illinois at Urbana-Champaign, Champaign, United States"],"email":"zhang414@illinois.edu","is_corresponding":false,"name":"Chenyang Zhang"},{"affiliations":["ShanghaiTech University, Shanghai, China"],"email":"wanghe1@shanghaitech.edu.cn","is_corresponding":false,"name":"He Wang"},{"affiliations":["Zhongshan Hospital Fudan University, Shanghai, China"],"email":"15301050137@fudan.edu.cn","is_corresponding":false,"name":"Tianle Ma"},{"affiliations":["Zhongshan Hospital Fudan University, Shanghai, China"],"email":"cjiang_fdu@yeah.net","is_corresponding":false,"name":"Chang Jiang"},{"affiliations":["Zhongshan Hospital Fudan University, Shanghai, China"],"email":"522649732@qq.com","is_corresponding":false,"name":"Yuheng Yan"},{"affiliations":["Zhongshan Hospital Fudan University, Shanghai, China"],"email":"yan.zuoqin@zs-hospital.sh.cn","is_corresponding":false,"name":"Zuoqin Yan"},{"affiliations":["Hong Kong University of Science and Technology, Hong Kong, Hong Kong"],"email":"mxj@cse.ust.hk","is_corresponding":false,"name":"Xiaojuan Ma"},{"affiliations":["Southeast University, Nanjing, China"],"email":"cshiag@connect.ust.hk","is_corresponding":false,"name":"Chuhan Shi"},{"affiliations":["ShanghaiTech University, Shanghai, China"],"email":"liquan@shanghaitech.edu.cn","is_corresponding":false,"name":"Quan Li"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yang Ouyang"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1089","time_end":"","time_stamp":"","time_start":"","title":"A Two-Phase Visualization System for Continuous Human-AI Collaboration in Sequelae Analysis and Modeling","uid":"v-short-1089","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1090":{"abstract":"Visualizing high dimensional data is challenging, since any dimensionality reduction technique will distort distances. A classic method in cartography\u2013Tissot\u2019s Indicatrix, specific to sphere-to-plane maps\u2013visualizes distortion using ellipses. Inspired by this idea, we describe the hypertrix: a method for representing distortions that occur when data is projected from arbitrarily high dimensions onto a 2D plane. 
We demonstrate our technique through synthetic and real-world datasets, and describe how this indicatrix can guide interpretations of nonlinear dimensionality reduction.","accessible_pdf":false,"authors":[{"affiliations":["Harvard University, Boston, United States"],"email":"sraval@g.harvard.edu","is_corresponding":true,"name":"Shivam Raval"},{"affiliations":["Harvard University, Cambridge, United States","Google Research, Cambridge, United States"],"email":"viegas@google.com","is_corresponding":false,"name":"Fernanda Viegas"},{"affiliations":["Harvard University, Cambridge, United States","Google Research, Cambridge, United States"],"email":"wattenberg@gmail.com","is_corresponding":false,"name":"Martin Wattenberg"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Shivam Raval"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1090","time_end":"","time_stamp":"","time_start":"","title":"Hypertrix: An indicatrix for high-dimensional visualizations","uid":"v-short-1090","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1096":{"abstract":"Coordinated multiple views (CMV) in a visual analytics system can help users explore multiple data representations simultaneously with linked interactions. However, the implementation of coordinated multiple views can be challenging. Without standard software libraries, visualization designers need to re-implement CMV during the development of each system. We introduce use-coordination, a grammar and software library that supports the efficient implementation of CMV. The grammar defines a JSON-based representation for an abstract coordination model from the information visualization literature. We contribute an optional extension to the model and grammar that allows for hierarchical coordination. Through three use cases, we show that use-coordination enables implementation of CMV in systems containing not only basic statistical charts but also more complex visualizations such as medical imaging volumes. 
We describe six software extensions, including a graphical editor for manipulation of coordination, which showcase the potential to build upon our coordination-focused declarative approach.","accessible_pdf":false,"authors":[{"affiliations":["Harvard Medical School, Boston, United States"],"email":"mark_keller@hms.harvard.edu","is_corresponding":true,"name":"Mark S Keller"},{"affiliations":["Harvard Medical School, Boston, United States"],"email":"trevor_manz@g.harvard.edu","is_corresponding":false,"name":"Trevor Manz"},{"affiliations":["Harvard Medical School, Boston, United States"],"email":"nils@hms.harvard.edu","is_corresponding":false,"name":"Nils Gehlenborg"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Mark S Keller"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1096","time_end":"","time_stamp":"","time_start":"","title":"Use-Coordination: Model, Grammar, and Library for Implementation of Coordinated Multiple Views","uid":"v-short-1096","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1097":{"abstract":"Visualization tools now commonly present automated insights highlighting salient data patterns, including correlations, distributions, outliers, and differences, among others. While these insights are valuable for data exploration and chart interpretation, users currently only have a binary choice of accepting or rejecting them, lacking the flexibility to refine the system logic or customize the insight generation process. To address this limitation, we present GROOT, a prototype system that allows users to proactively specify and refine automated data insights. The system allows users to directly manipulate chart elements to receive insight recommendations based on their selections. Additionally, GROOT provides users with a manual editing interface to customize, reconfigure, or add new insights to individual charts and propagate them to future explorations. 
We describe a usage scenario to illustrate how these features collectively support insight editing and configuration, and discuss opportunities for future work, including incorporating LLMs, improving semantic data and visualization search, and supporting insight management.","accessible_pdf":false,"authors":[{"affiliations":["University of Maryland, College Park, College Park, United States","Tableau Research, Seattle, United States"],"email":"sgathani@cs.umd.edu","is_corresponding":true,"name":"Sneha Gathani"},{"affiliations":["Tableau Research, Seattle, United States"],"email":"amcrisan@uwaterloo.ca","is_corresponding":false,"name":"Anamaria Crisan"},{"affiliations":["Tableau Research, Palo Alto, United States"],"email":"vsetlur@tableau.com","is_corresponding":false,"name":"Vidya Setlur"},{"affiliations":["Tableau Research, Seattle, United States"],"email":"arjun.srinivasan.10@gmail.com","is_corresponding":false,"name":"Arjun Srinivasan"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Sneha Gathani"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1097","time_end":"","time_stamp":"","time_start":"","title":"Groot: An Interface for Editing and Configuring Automated Data Insights","uid":"v-short-1097","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1100":{"abstract":"Confidence scores of automatic speech recognition (ASR) outputs are often inadequately communicated, preventing their seamless integration into analytical workflows. In this paper, we introduce ConFides, a visual analytic system developed in collaboration with intelligence analysts to address this issue. ConFides aims to aid exploration and post-AI-transcription editing by visually representing the confidence associated with the transcription. We demonstrate how our tool can assist intelligence analysts who use ASR outputs in their analytical and exploratory tasks and how it can help mitigate misinterpretation of crucial information. We also discuss opportunities for improving textual data cleaning and model transparency for human-machine collaboration.","accessible_pdf":false,"authors":[{"affiliations":["Washington University in St. Louis, St. Louis, United States"],"email":"sha@wustl.edu","is_corresponding":true,"name":"Sunwoo Ha"},{"affiliations":["Washington University in St. Louis, St. Louis, United States"],"email":"chaelim@wustl.edu","is_corresponding":false,"name":"Chaehun Lim"},{"affiliations":["Smith College, Northampton, United States"],"email":"jcrouser@smith.edu","is_corresponding":false,"name":"R. Jordan Crouser"},{"affiliations":["Washington University in St. 
Louis, United States"],"email":"alvitta@wustl.edu","is_corresponding":false,"name":"Alvitta Ottley"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Sunwoo Ha"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1100","time_end":"","time_stamp":"","time_start":"","title":"ConFides: A Visual Analytics Solution for Automated Speech Recognition Analysis and Exploration","uid":"v-short-1100","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1101":{"abstract":"Color coding, a technique assigning specific colors to different information types, has proven advantages in aiding human cognitive activities, especially reading and comprehension. The rise of Large Language Models (LLMs) has streamlined document coding, enabling simple automatic text labeling with various schemes. This has the potential to make color-coding more accessible and benefit more users. However, the importance of color choice, particularly in aiding textual information seeking through various color schemes, is not well studied. This paper presents a user study assessing the effectiveness of various color schemes generated by different base colors for readers' information-seeking performance in text documents color-coded by LLMs. Participants performed information-seeking tasks within scholarly papers' abstracts, each coded with a different scheme under time constraints. Results showed that non-analogous color schemes led to better information-seeking performance in both accuracy and response time. Yellow-inclusive color schemes led to shorter response times and were also preferred by most participants. These findings could inform better choices of color schemes for annotating text documents. As LLMs advance document coding, we advocate for more research focusing on the \"color\" aspect of color-coding techniques.","accessible_pdf":false,"authors":[{"affiliations":["Pennsylvania State University, University Park, United States"],"email":"samnghoyin@gmail.com","is_corresponding":true,"name":"Ho Yin Ng"},{"affiliations":["Pennsylvania State University, University Park, United States"],"email":"zmh5268@psu.edu","is_corresponding":false,"name":"Zeyu He"},{"affiliations":["Pennsylvania State University, University Park, United States"],"email":"txh710@psu.edu","is_corresponding":false,"name":"Ting-Hao Kenneth Huang"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Ho Yin Ng"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1101","time_end":"","time_stamp":"","time_start":"","title":"What Color Scheme is More Effective in Assisting Readers to Locate Information in a Color-Coded Article?","uid":"v-short-1101","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1109":{"abstract":"Homophily refers to the tendency of individuals to associate with others who are similar to them in characteristics, such as race, ethnicity, age, gender, or interests. 
In this paper, we investigate if individuals exhibit racial homophily when viewing visualizations, using mass shooting data in the United States as the example topic. We conducted a crowdsourced experiment (N=450) where each participant was shown a visualization displaying the counts of mass shooting victims, highlighting the counts for one of three racial groups (White, Black, or Hispanic). Participants were assigned to view visualizations highlighting their own race or a different race to assess the influence of racial concordance on changes in affect (emotion) and attitude towards gun control. While we did not find evidence of homophily, the results showed a significant negative shift in affect across all visualization conditions. Notably, political ideology significantly impacted changes in affect, with more liberal views correlating with a more negative affect change. Our findings underscore the complexity of reactions to mass shooting visualizations and highlight the need for additional measures for understanding homophily in visualizations.","accessible_pdf":false,"authors":[{"affiliations":["New York University, Brooklyn, United States"],"email":"pt2393@nyu.edu","is_corresponding":true,"name":"Poorna Talkad Sukumar"},{"affiliations":["New York University, Brooklyn, United States"],"email":"mporfiri@nyu.edu","is_corresponding":false,"name":"Maurizio Porfiri"},{"affiliations":["New York University, New York, United States"],"email":"onov@nyu.edu","is_corresponding":false,"name":"Oded Nov"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Poorna Talkad Sukumar"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1109","time_end":"","time_stamp":"","time_start":"","title":"Connections Beyond Data: Exploring Homophily With Visualizations","uid":"v-short-1109","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1114":{"abstract":"As visualization literacy and its implications gain prominence, we need effective methods to teach and prepare students for the variety of visualizations they might encounter in an increasingly data-driven world. Recently, the potential of comics has been recognized in various data visualization contexts, including educational settings. In this paper, we describe the development of a workshop in which we use our \u201ccomic construction kit\u201d as a tool for students to understand various data visualization techniques through an interactive creative approach of creating explanatory comics. We report on our insights and learnings from holding eight workshops with high school students, high school teachers, university students, and university lecturers, aiming to enhance the landscape of hands-on visualization activities that can enrich the visualization classroom. The comic construction kit and all supplemental materials are open source under a CC-BY license and available at https://fhstp.github.io/comixplain/vis4schools.html.","accessible_pdf":false,"authors":[{"affiliations":["St. P\u00f6lten University of Applied Sciences, St. P\u00f6lten, Austria"],"email":"magdalena.boucher@fhstp.ac.at","is_corresponding":true,"name":"Magdalena Boucher"},{"affiliations":["St. Poelten University of Applied Sciences, St. 
Poelten, Austria"],"email":"christina.stoiber@fhstp.ac.at","is_corresponding":false,"name":"Christina Stoiber"},{"affiliations":["School of Informatics, Communications and Media, Hagenberg im M\u00fchlkreis, Austria"],"email":"mandy.keck@fh-hagenberg.at","is_corresponding":false,"name":"Mandy Keck"},{"affiliations":["St. Poelten University of Applied Sciences, St. Poelten, Austria"],"email":"victor.oliveira@fhstp.ac.at","is_corresponding":false,"name":"Victor Adriel de Jesus Oliveira"},{"affiliations":["St. Poelten University of Applied Sciences, St. Poelten, Austria"],"email":"wolfgang.aigner@fhstp.ac.at","is_corresponding":false,"name":"Wolfgang Aigner"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Magdalena Boucher"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1114","time_end":"","time_stamp":"","time_start":"","title":"The Comic Construction Kit: An Activity for Students to Learn and Explain Data Visualizations","uid":"v-short-1114","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1116":{"abstract":"Visualizations support rapid analysis of scientific datasets, allowing viewers to glean aggregate information (e.g., the mean) within split-seconds. While prior research has explored this ability in conventional charts, it is unclear if spatial visualizations used by computational scientists afford a similar ensemble perception capacity. We investigate people's ability to estimate two summary statistics, mean and variance, from pseudocolor scalar fields. In a crowdsourced experiment, we find that participants can reliably characterize both statistics, although variance discrimination requires a much stronger signal. Multi-hue and diverging colormaps outperformed monochromatic, luminance ramps in aiding this extraction. Analysis of qualitative responses suggests that participants often estimate the distribution of hotspots and valleys as visual proxies for data statistics. These findings suggest that people's summary interpretation of spatial datasets is likely driven by the appearance of discrete color segments, rather than assessments of overall luminance. Implicit color segmentation in quantitative displays could thus prove more useful than previously assumed by facilitating quick, gist-level judgments about color-coded visualizations.","accessible_pdf":false,"authors":[{"affiliations":["Argonne National Laboratory, Lemont, United States"],"email":"vmateevitsi@anl.gov","is_corresponding":false,"name":"Victor A. Mateevitsi"},{"affiliations":["Argonne National Laboratory, Lemont, United States","University of Illinois Chicago, Chicago, United States"],"email":"papka@anl.gov","is_corresponding":false,"name":"Michael E. 
Papka"},{"affiliations":["Indiana University, Indianapolis, United States"],"email":"redak@iu.edu","is_corresponding":true,"name":"Khairi Reda"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Khairi Reda"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1116","time_end":"","time_stamp":"","time_start":"","title":"Science in a Blink: Supporting Ensemble Perception in Scalar Fields","uid":"v-short-1116","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1117":{"abstract":"Geovisualizations are powerful tools for exploratory spatial analysis, enabling sighted users to discern patterns, trends, and relationships within geographic data. However, these visual tools have remained largely inaccessible to screen-reader users. We present AltGeoViz, a new system we designed to facilitate geovisualization exploration for these users. AltGeoViz dynamically generates alt-text descriptions based on the user's current map view, providing summaries of spatial patterns and descriptive statistics. In a study of five screen-reader users, we found that AltGeoViz enabled them to interact with geovisualizations in previously infeasible ways. Participants demonstrated a clear understanding of data summaries and their location context, and they could synthesize spatial understandings of their explorations. Moreover, we identified key areas for improvement, such as the addition of intuitive spatial navigation controls and comparative analysis features.","accessible_pdf":false,"authors":[{"affiliations":["University of Washington, Seattle, United States"],"email":"chuchuli@cs.washington.edu","is_corresponding":true,"name":"Chu Li"},{"affiliations":["University of Washington, Seattle, United States"],"email":"ypang2@cs.washington.edu","is_corresponding":false,"name":"Rock Yuren Pang"},{"affiliations":["University of Washington, Seattle, United States"],"email":"asharif@cs.washington.edu","is_corresponding":false,"name":"Ather Sharif"},{"affiliations":["University of Washington, Seattle, United States"],"email":"chheda@cs.washington.edu","is_corresponding":false,"name":"Arnavi Chheda-Kothary"},{"affiliations":["University of Washington, Seattle, United States"],"email":"jheer@uw.edu","is_corresponding":false,"name":"Jeffrey Heer"},{"affiliations":["University of Washington, Seattle, United States"],"email":"jonf@cs.uw.edu","is_corresponding":false,"name":"Jon E. Froehlich"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Chu Li"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1117","time_end":"","time_stamp":"","time_start":"","title":"AltGeoViz: Facilitating Accessible Geovisualization","uid":"v-short-1117","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1119":{"abstract":"Analyzing uncertainty in spatial data is a vital task in many domains, as for example with climate and weather simulation ensembles. 
Although there are many methods to support the analysis of the uncertainty, such as uncertain isocontours or calculation of statistical values, it is still a challenge to get an overview of the uncertainty and then decide on a further method or parameter to analyze the data, or on a region or point of interest to investigate further. We present cumulative height fields, a visualization method for 2D scalar field ensembles using the marginal empirical distribution function, and show preliminary results using volume rendering and slicing for the Max Planck Institute Grand Ensemble.","accessible_pdf":false,"authors":[{"affiliations":["Institute of Computer Science, Leipzig University, Leipzig, Germany"],"email":"daetz@informatik.uni-leipzig.de","is_corresponding":true,"name":"Tomas Rodolfo Daetz Chacon"},{"affiliations":["German Climate Computing Center (DKRZ), Hamburg, Germany"],"email":"boettinger@dkrz.de","is_corresponding":false,"name":"Michael B\u00f6ttinger"},{"affiliations":["Leipzig University, Leipzig, Germany"],"email":"scheuermann@informatik.uni-leipzig.de","is_corresponding":false,"name":"Gerik Scheuermann"},{"affiliations":["Leipzig University, Leipzig, Germany"],"email":"heine@informatik.uni-leipzig.de","is_corresponding":false,"name":"Christian Heine"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Tomas Rodolfo Daetz Chacon"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1119","time_end":"","time_stamp":"","time_start":"","title":"Visualization of 2D Scalar Field Ensembles Using Volume Visualization of the Empirical Distribution Function","uid":"v-short-1119","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1121":{"abstract":"Many real-world networks contain structurally-equivalent nodes. These are defined as vertices that share the same set of neighboring nodes, making them interchangeable under a traditional graph layout approach. However, many real-world graphs also have properties associated with nodes, adding additional meaning to them. We present an approach for swapping locations of structurally-equivalent nodes in graph layout so that those with more similar properties have closer proximity to each other. This improves the usefulness of the visualization from an attribute perspective without negatively impacting the visualization from a structural perspective. 
We include an algorithm for finding these sets of nodes in linear time, as well as methodologies for ordering nodes based on their attribute similarity, which work for scalar, ordinal, multidimensional, and categorical data.","accessible_pdf":false,"authors":[{"affiliations":["Pacific Northwest National Lab, Richland, United States"],"email":"patrick.mackey@pnnl.gov","is_corresponding":true,"name":"Patrick Mackey"},{"affiliations":["University of Arizona, Tucson, United States","Pacific Northwest National Laboratory, Richland, United States"],"email":"jacobmiller1@arizona.edu","is_corresponding":false,"name":"Jacob Miller"},{"affiliations":["Pacific Northwest National Laboratory, Richland, United States"],"email":"liz.f@pnnl.gov","is_corresponding":false,"name":"Liz Faultersack"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Patrick Mackey"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1121","time_end":"","time_stamp":"","time_start":"","title":"Improving Property Graph Layouts by Leveraging Attribute Similarity for Structurally Equivalent Nodes","uid":"v-short-1121","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1126":{"abstract":"Psychological research often involves understanding psychological constructs through conducting factor analysis on data collected by a questionnaire, which can comprise hundreds of questions. Without interactive systems for interpreting factor models, researchers are frequently exposed to subjectivity, potentially leading to misinterpretations or overlooked crucial information. This paper introduces FAVis, a novel interactive visualization tool designed to aid researchers in interpreting and evaluating factor analysis results. FAVis enhances the understanding of relationships between variables and factors by supporting multiple views for visualizing factor loadings and correlations, allowing users to analyze information from various perspectives. The primary feature of FAVis is to enable users to set optimal thresholds for factor loadings to balance clarity and information retention. FAVis also allows users to assign tags to variables, enhancing the understanding of factors by linking them to their associated psychological constructs. We conduct a case study on a dataset from the Motivational State Questionnaire, utilizing a three-factor common factor model. 
Our user study demonstrates the utility of FAVis in various tasks.","accessible_pdf":false,"authors":[{"affiliations":["University of Notre Dame, Notre Dame, United States"],"email":"ylu22@nd.edu","is_corresponding":true,"name":"Yikai Lu"},{"affiliations":["University of Notre Dame, Notre Dame, United States"],"email":"chaoli.wang@nd.edu","is_corresponding":false,"name":"Chaoli Wang"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yikai Lu"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1126","time_end":"","time_stamp":"","time_start":"","title":"FAVis: Visual Analytics of Factor Analysis for Psychological Research","uid":"v-short-1126","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1127":{"abstract":"In this paper, we analyze the Apple Vision Pro hardware and the visionOS software platform, assessing their capabilities for volume rendering of structured grids, a prevalent technique across various applications. The Apple Vision Pro supports multiple display modes, from classical augmented reality (AR) using video see-through technology to immersive virtual reality (VR) environments that exclusively render virtual objects. These modes utilize different APIs and exhibit distinct capabilities. Our focus is on direct volume rendering, selected for its implementation challenges due to the native graphics APIs being predominantly oriented towards surface shading. Volume rendering is particularly vital in fields where AR and VR visualizations offer substantial benefits, such as in medicine and manufacturing. Despite its initial high cost, we anticipate that the Vision Pro will become more accessible and affordable over time, following Apple's track record of market expansion. 
As these devices become more prevalent, understanding how to effectively program and utilize them becomes increasingly important, offering significant opportunities for innovation and practical applications in various sectors.","accessible_pdf":false,"authors":[{"affiliations":["University of Duisburg-Essen, Duisburg, Germany"],"email":"camilla.hrycak@uni-due.de","is_corresponding":true,"name":"Camilla Hrycak"},{"affiliations":["University of Duisburg-Essen, Duisburg, Germany"],"email":"david.lewakis@stud.uni-due.de","is_corresponding":false,"name":"David Lewakis"},{"affiliations":["University of Duisburg-Essen, Duisburg, Germany"],"email":"jens.krueger@uni-due.de","is_corresponding":false,"name":"Jens Harald Krueger"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Camilla Hrycak"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1127","time_end":"","time_stamp":"","time_start":"","title":"Investigating the Apple Vision Pro Spatial Computing Platform for GPU-Based Volume Visualization","uid":"v-short-1127","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1130":{"abstract":"Visualization, from simple line plots to complex high-dimensional visual analysis systems, has established itself throughout numerous domains to explore, analyze, and evaluate data. Applying such visualizations in the context of simulation science, where High-Performance Computing (HPC) produces ever-growing amounts of data that is more complex, potentially multidimensional, and multi-modal, requires resources and a high level of technological experience often not available to domain experts. In this work, we present DaVE - a curated database of visualization examples, which aims to provide state-of-the-art and advanced visualization methods that arise in the context of HPC applications. Based on domain- or data-specific descriptors entered by the user, DaVE provides a list of appropriate visualization techniques, each accompanied by descriptions, examples, references, and resources. Sample code, adaptable container templates, and recipes for easy integration in HPC applications can be downloaded for easy access to high-fidelity visualizations. 
While the database is currently filled with a limited number of entries based on a broad evaluation of needs and challenges of current HPC users, DaVE is designed to be easily extended by experts from both the visualization and HPC communities.","accessible_pdf":false,"authors":[{"affiliations":["RWTH Aachen University, Aachen, Germany"],"email":"koenen@informatik.rwth-aachen.de","is_corresponding":true,"name":"Jens Koenen"},{"affiliations":["RPTU Kaiserslautern-Landau, Kaiserslautern, Germany"],"email":"m.petersen@rptu.de","is_corresponding":false,"name":"Marvin Petersen"},{"affiliations":["RPTU Kaiserslautern-Landau, Kaiserslautern, Germany"],"email":"garth@rptu.de","is_corresponding":false,"name":"Christoph Garth"},{"affiliations":["RWTH Aachen University, Aachen, Germany"],"email":"gerrits@vis.rwth-aachen.de","is_corresponding":false,"name":"Tim Gerrits"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Jens Koenen"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1130","time_end":"","time_stamp":"","time_start":"","title":"DaVE - A Curated Database of Visualization Examples","uid":"v-short-1130","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1135":{"abstract":"Humans struggle to perceive and interpret high-dimensional data. Therefore, high-dimensional data are often projected into two dimensions for visualization. Many applications benefit from complex nonlinear dimensionality reduction techniques, but the effects of individual high-dimensional features are hard to explain in the two-dimensional space. Most visualization solutions use multiple two-dimensional plots, each showing the effect of one high-dimensional feature in two dimensions; this approach creates a need for a visual inspection of k plots for a k-dimensional input space. Our solution, Feature Clock, provides a novel approach that eliminates the need to inspect these k plots to grasp the influence of original features on the data structure depicted in two dimensions. 
Feature Clock enhances the explainability and compactness of visualizations of embedded data and is available in an open-source Python library.","accessible_pdf":false,"authors":[{"affiliations":["ETH Z\u00fcrich, Z\u00fcrich, Switzerland"],"email":"ovcharenko.folga@gmail.com","is_corresponding":true,"name":"Olga Ovcharenko"},{"affiliations":["ETH Z\u00fcrich, Z\u00fcrich, Switzerland"],"email":"rita.sevastjanova@uni-konstanz.de","is_corresponding":false,"name":"Rita Sevastjanova"},{"affiliations":["ETH Zurich, Z\u00fcrich, Switzerland"],"email":"valentina.boeva@inf.ethz.ch","is_corresponding":false,"name":"Valentina Boeva"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Olga Ovcharenko"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1135","time_end":"","time_stamp":"","time_start":"","title":"Feature Clock: High-Dimensional Effects in Two-Dimensional Plots","uid":"v-short-1135","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1144":{"abstract":"Reconstruction of 3D scenes from 2D images is a technical challenge that impacts domains from Earth and planetary sciences and space exploration to augmented and virtual reality. Typically, reconstruction algorithms first identify common features across images and then minimize reconstruction errors after estimating the shape of the terrain. This bundle adjustment (BA) step optimizes around a single, simplifying scalar value that obfuscates many possible causes of reconstruction errors (e.g., initial estimate of the position and orientation of the camera, lighting conditions, ease of feature detection in the terrain). Reconstruction errors can lead to inaccurate scientific inferences or endanger a spacecraft exploring a remote environment. To address this challenge, we present VECTOR, a visual analysis tool that improves error inspection for stereo reconstruction BA. VECTOR provides analysts with previously unavailable visibility into feature locations, camera pose, and computed 3D points. VECTOR was developed in partnership with the Perseverance Mars Rover and Ingenuity Mars Helicopter terrain reconstruction team at the NASA Jet Propulsion Laboratory. 
We report on how this tool was used to debug and improve terrain reconstruction for the Mars 2020 mission.","accessible_pdf":false,"authors":[{"affiliations":["Northeastern University, Boston, United States"],"email":"racquel.fygenson@gmail.com","is_corresponding":false,"name":"Racquel Fygenson"},{"affiliations":["Weta FX, Auckland, New Zealand"],"email":"kjawad@andrew.cmu.edu","is_corresponding":false,"name":"Kazi Jawad"},{"affiliations":["Art Center, Pasadena, United States"],"email":"zongzhanisabelli@gmail.com","is_corresponding":false,"name":"Zongzhan Li"},{"affiliations":["California Institute of Technology, Pasadena, United States"],"email":"francois.ayoub@jpl.nasa.gov","is_corresponding":false,"name":"Francois Ayoub"},{"affiliations":["California Institute of Technology, Pasadena, United States"],"email":"bob.deen@jpl.nasa.gov","is_corresponding":false,"name":"Robert G Deen"},{"affiliations":["California Institute of Technology, Pasadena, United States"],"email":"sd@scottdavidoff.com","is_corresponding":false,"name":"Scott Davidoff"},{"affiliations":["Carnegie Mellon University, Pittsburgh, United States"],"email":"domoritz@cmu.edu","is_corresponding":false,"name":"Dominik Moritz"},{"affiliations":["NASA-JPL, Pasadena, United States"],"email":"mauricio.a.hess.flores@jpl.nasa.gov","is_corresponding":true,"name":"Mauricio Hess-Flores"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Mauricio Hess-Flores"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1144","time_end":"","time_stamp":"","time_start":"","title":"Opening the black box of 3D reconstruction error analysis with VECTOR","uid":"v-short-1144","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1146":{"abstract":"Millions of runners rely on smart watches that display running-related metrics such as pace, heart rate and distance for training and racing -- mostly with text and numbers. Although research tells us that visualizations are a good alternative to text on smart watches, we know little about how visualizations can help in realistic running scenarios. We conducted a study in which 20 runners completed running-related tasks on an outdoor track using both text and visualizations. 
Our results show that runners are 1.5 to 8 times faster in completing those tasks with visualizations than with text, prefer visualizations to text, and would use such visualizations while running -- were they available on their smart watch.","accessible_pdf":false,"authors":[{"affiliations":["University of Victoria, Victoria, Canada"],"email":"sarinaksj@uvic.ca","is_corresponding":false,"name":"Sarina Kashanj"},{"affiliations":["University of Victoria, Victoria, Canada","Delft University of Technology, Delft, Netherlands"],"email":"xiyao.wang23@gmail.com","is_corresponding":false,"name":"Xiyao Wang"},{"affiliations":["University of Victoria, Victoria, Canada"],"email":"cperin@uvic.ca","is_corresponding":true,"name":"Charles Perin"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Charles Perin"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1146","time_end":"","time_stamp":"","time_start":"","title":"Visualizations on Smart Watches while Running: It Actually Helps!","uid":"v-short-1146","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1150":{"abstract":"Exploratory visual data analysis tools empower data analysts to efficiently and intuitively explore data insights throughout the entire analysis cycle. However, the gap between common programmatic analysis (e.g., within computational notebooks) and exploratory visual analysis leads to a disjointed and inefficient data analysis experience. To bridge this gap, we developed PyGWalker, a Python library that offers on-the-fly assistance for exploratory visual data analysis. It features a lightweight and intuitive GUI with a shelf builder modality. Its loosely coupled architecture supports multiple computational environments to accommodate varying data sizes. Since its release in February 2023, PyGWalker has gained much attention, with 468k downloads on PyPI and over 9.8k stars on GitHub as of April 2024. 
This demonstrates its value to the data science and visualization community, with researchers and developers integrating it into their own applications and studies.","accessible_pdf":false,"authors":[{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China","Kanaries Data Inc., Hangzhou, China"],"email":"yue.yu@connect.ust.hk","is_corresponding":true,"name":"Yue Yu"},{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"lshenaj@connect.ust.hk","is_corresponding":false,"name":"Leixian Shen"},{"affiliations":["Kanaries Data Inc., Hangzhou, China"],"email":"feilong@kanaries.net","is_corresponding":false,"name":"Fei Long"},{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"huamin@cse.ust.hk","is_corresponding":false,"name":"Huamin Qu"},{"affiliations":["Kanaries Data Inc., Hangzhou, China"],"email":"haochen@kanaries.net","is_corresponding":false,"name":"Hao Chen"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yue Yu"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1150","time_end":"","time_stamp":"","time_start":"","title":"PyGWalker: On-the-fly Assistant for Exploratory Visual Data Analysis","uid":"v-short-1150","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1155":{"abstract":"Augmented reality (AR) area labels can highlight real-life objects, visualize real world regions with arbitrary boundaries, and show invisible objects or features. Environmental conditions such as lighting and clutter can decrease fixed or passive label visibility, and labels that have high opacity levels can occlude crucial details in the environment. We design and evaluate active AR area label visualization modes to enhance visibility across real-life environments, while still retaining environment details within the label. For this, we define a distant characteristic color from the environment in perceptual CIELAB space, then introduce spatial variations among label pixel colors based on the underlying environment variation. 
In a user study with 18 participants, we discovered that our active label visualization modes can be comparable in visibility to a fixed green baseline by Gabbard et al., and can outperform it with added spatial variation in cluttered environments, across varying levels of lighting (e.g., nighttime), and in environments with colors similar to the fixed baseline color.","accessible_pdf":false,"authors":[{"affiliations":["Brown University, Providence, United States"],"email":"hojung_kwon@brown.edu","is_corresponding":false,"name":"Hojung Kwon"},{"affiliations":["Brown University, Providence, United States"],"email":"yuanbo_li@brown.edu","is_corresponding":false,"name":"Yuanbo Li"},{"affiliations":["Brown University, Providence, United States"],"email":"chloe_ye2019@hotmail.com","is_corresponding":false,"name":"Xiaohan Ye"},{"affiliations":["Brown University, Providence, United States"],"email":"praccho_muna-mcquay@brown.edu","is_corresponding":false,"name":"Praccho Muna-McQuay"},{"affiliations":["Duke University, Durham, United States"],"email":"liuren.yin@duke.edu","is_corresponding":false,"name":"Liuren Yin"},{"affiliations":["Brown University, Providence, United States"],"email":"james_tompkin@brown.edu","is_corresponding":true,"name":"James Tompkin"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["James Tompkin"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1155","time_end":"","time_stamp":"","time_start":"","title":"Active Appearance and Spatial Variation Can Improve Visibility in Area Labels for Augmented Reality","uid":"v-short-1155","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1156":{"abstract":"Compound graphs are networks in which vertices can be grouped into larger subsets, with these subsets capable of further grouping, resulting in a nesting that can be many levels deep. Such graphs arise in several applications including biological workflows, chemical equations, and computational data flow analysis. Common layouts prioritize the lowest level of the grouping, down to the individual ungrouped vertices, which can make the higher level grouped structures more difficult to discern, especially in deeply nested networks. We contribute an overview+detail layout that preserves the saliency of the higher level network structure when groups are expanded to show internal nested structure. Our layout draws inner structures adjacent to their parents, using a modified tree layout to place substructures. We describe our algorithm and then present case studies demonstrating the layout's utility to a domain expert working on data flow analysis. 
Finally, we discuss network parameters and analysis situations in which our layout is well suited.","accessible_pdf":false,"authors":[{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"hatch.on27@gmail.com","is_corresponding":true,"name":"Chang Han"},{"affiliations":["University of Arizona, Tucson, United States"],"email":"lieffers@arizona.edu","is_corresponding":false,"name":"Justin Lieffers"},{"affiliations":["University of Arizona, Tucson, United States"],"email":"claytonm@arizona.edu","is_corresponding":false,"name":"Clayton Morrison"},{"affiliations":["The University of Utah, Salt Lake City, United States"],"email":"kisaacs@sci.utah.edu","is_corresponding":false,"name":"Katherine E. Isaacs"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Chang Han"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1156","time_end":"","time_stamp":"","time_start":"","title":"An Overview + Detail Layout for Visualizing Compound Graphs","uid":"v-short-1156","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1159":{"abstract":"With two studies, we assess how different walking trajectories (straight line, circular, and infinity) and speeds (2 km/h, 4 km/h, and 6 km/h) influence the accuracy and response time of participants reading micro visualizations on a smartwatch. We showed our participants common watch face micro visualizations including date, time, weather information, and four complications showing progress charts of fitness data. Our findings suggest that while walking trajectories did not significantly affect reading performance, overall walking activity, especially at high speeds, hurt reading accuracy and, to some extent, response time.","accessible_pdf":false,"authors":[{"affiliations":["University of Stuttgart, Stuttgart, Germany"],"email":"fairouz.grioui@vis.uni-stuttgart.de","is_corresponding":true,"name":"Fairouz Grioui"},{"affiliations":["University of Stuttgart, Stuttgart, Germany"],"email":"research@blascheck.eu","is_corresponding":false,"name":"Tanja Blascheck"},{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Orsay, France","Inria, Saclay, France"],"email":"yaolijie0219@gmail.com","is_corresponding":false,"name":"Lijie Yao"},{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Orsay, France","Inria, Saclay, France"],"email":"petra.isenberg@inria.fr","is_corresponding":false,"name":"Petra Isenberg"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Fairouz Grioui"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1159","time_end":"","time_stamp":"","time_start":"","title":"Micro Visualizations on a Smartwatch: Assessing Reading Performance While Walking","uid":"v-short-1159","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1161":{"abstract":"Digital twins are an excellent tool to model, visualize, and simulate complex systems, to understand and optimize their operation. 
In this work, we present the technical challenges of real-time visualization of a digital twin of the Frontier supercomputer. We show the initial prototype and current state of the twin and highlight technical design challenges of visualizing such a large High Performance Computing (HPC) system. The goal is to understand the use of augmented reality as a primary way to extract information and collaborate on digital twins of complex systems. This leverages the spatio-temporal aspect of a 3D representation of a digital twin, with the ability to view historical and real-time telemetry, trigger simulations of a system state, and view the results, which can be augmented via dashboards for details. Finally, we discuss considerations and opportunities for augmented reality of digital twins of large-scale, parallel computers.","accessible_pdf":false,"authors":[{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"maiterthm@ornl.gov","is_corresponding":true,"name":"Matthias Maiterth"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"brewerwh@ornl.gov","is_corresponding":false,"name":"Wes Brewer"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"dewetd@ornl.gov","is_corresponding":false,"name":"Dane De Wet"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"greenwoodms@ornl.gov","is_corresponding":false,"name":"Scott Greenwood"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"kumarv@ornl.gov","is_corresponding":false,"name":"Vineet Kumar"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"hinesjr@ornl.gov","is_corresponding":false,"name":"Jesse Hines"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"bouknightsl@ornl.gov","is_corresponding":false,"name":"Sedrick L Bouknight"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"wangz@ornl.gov","is_corresponding":false,"name":"Zhe Wang"},{"affiliations":["Hewlett Packard Enterprise, Berkshire, United Kingdom"],"email":"tim.dykes@hpe.com","is_corresponding":false,"name":"Tim Dykes"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"fwang2@ornl.gov","is_corresponding":false,"name":"Feiyi Wang"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Matthias Maiterth"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1161","time_end":"","time_stamp":"","time_start":"","title":"Visualizing an Exascale Data Center Digital Twin: Considerations, Challenges and Opportunities","uid":"v-short-1161","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1163":{"abstract":"Integral curves have been widely used to represent and analyze various vector fields. Curve-based clustering and pattern search approaches are usually applied to aid the identification of meaningful patterns from large numbers of integral curves. However, they do not support an interactive, level-of-detail exploration of these patterns. To address this, we propose a Curve Segment Neighborhood Graph (CSNG) to capture the relationships between neighboring curve segments. 
This graph representation enables us to adapt the fast community detection algorithm, i.e., the Louvain algorithm, to identify individual graph communities from CSNG. Our results show that these communities often correspond to the features of the flow. To achieve a multi-level interactive exploration of the detected communities, we adapt a force-directed layout that allows users to refine and re-group communities based on their domain knowledge. We incorporate the proposed techniques into an interactive system to enable effective analysis and interpretation of complex patterns in large-scale integral curve datasets.","accessible_pdf":false,"authors":[{"affiliations":["University of Houston, Houston, United States"],"email":"nguyenpkk95@gmail.com","is_corresponding":true,"name":"Nguyen K Phan"},{"affiliations":["University of Houston, Houston, United States"],"email":"chengu@cs.uh.edu","is_corresponding":false,"name":"Guoning Chen"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Nguyen K Phan"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1163","time_end":"","time_stamp":"","time_start":"","title":"Curve Segment Neighborhood-based Vector Field Exploration","uid":"v-short-1163","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1166":{"abstract":"Custom animated visualizations of large, complex datasets are helpful across many domains, but they are hard to develop. Much of the difficulty arises from maintaining visualization state across a large set of animated graphical elements that may change in number over time. We contribute Counterpoint, a framework for state management designed to help implement such visualizations in JavaScript. Using Counterpoint, developers can manipulate large collections of marks with reactive attributes that are easy to render in scalable APIs such as Canvas and WebGL. Counterpoint also helps orchestrate the entry and exit of graphical elements using the concept of a rendering \"stage.\" Through a performance evaluation, we show that Counterpoint adds minimal overhead over current high-performance rendering techniques while simplifying implementation. We also provide two examples of visualizations created using Counterpoint that illustrate its flexibility and compatibility with other visualization toolkits as well as considerations for users with disabilities. 
Counterpoint is open-source and available at https://github.com/cmudig/counterpoint.","accessible_pdf":false,"authors":[{"affiliations":["Carnegie Mellon University, Pittsburgh, United States"],"email":"vsivaram@andrew.cmu.edu","is_corresponding":true,"name":"Venkatesh Sivaraman"},{"affiliations":["Carnegie Mellon University, Pittsburgh, United States"],"email":"fje@cmu.edu","is_corresponding":false,"name":"Frank Elavsky"},{"affiliations":["Carnegie Mellon University, Pittsburgh, United States"],"email":"domoritz@cmu.edu","is_corresponding":false,"name":"Dominik Moritz"},{"affiliations":["Carnegie Mellon University, Pittsburgh, United States"],"email":"adamperer@cmu.edu","is_corresponding":false,"name":"Adam Perer"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Venkatesh Sivaraman"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1166","time_end":"","time_stamp":"","time_start":"","title":"Counterpoint: Orchestrating Large-Scale Custom Animated Visualizations","uid":"v-short-1166","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1173":{"abstract":"Visualizing citation relations with network structures is widely used, but the visual complexity can make it challenging for individual researchers to navigate through them. We collected data from 18 researchers using an interface that we designed with network simplification methods and analyzed how users browsed and identified important papers. Our analysis reveals six major patterns used for identifying papers of interest, which can be categorized into three key components: Fields, Bridges, and Foundations, each viewed from two distinct perspectives: layout-oriented and connection-oriented. The connection-oriented approach was found to be more effective for selecting relevant papers, but the layout-oriented method was adopted more often, even though it led to unexpected results and user frustration. Our findings emphasize the importance of integrating these components and the necessity to balance visual layouts with meaningful connections to enhance the effectiveness of citation networks in academic browsing systems.","accessible_pdf":false,"authors":[{"affiliations":["Seoul National University, Seoul, Korea, Republic of"],"email":"krchoe@hcil.snu.ac.kr","is_corresponding":true,"name":"Kiroong Choe"},{"affiliations":["Seoul National University, Seoul, Korea, Republic of"],"email":"gracekim027@snu.ac.kr","is_corresponding":false,"name":"Eunhye Kim"},{"affiliations":["Dept. 
of Electrical and Computer Engineering, SNU, Seoul, Korea, Republic of"],"email":"paulmoguri@snu.ac.kr","is_corresponding":false,"name":"Sangwon Park"},{"affiliations":["Seoul National University, Seoul, Korea, Republic of"],"email":"jseo@snu.ac.kr","is_corresponding":false,"name":"Jinwook Seo"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Kiroong Choe"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1173","time_end":"","time_stamp":"","time_start":"","title":"Fields, Bridges, and Foundations: How Researchers Browse Citation Network Visualizations","uid":"v-short-1173","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1177":{"abstract":"The proliferation of misleading visualizations online, particularly during critical events like public health crises and elections, poses a significant risk of misinformation. This work investigates the capability of GPT-4V to detect misleading visualizations. Utilizing a dataset of tweet-visualization pairs with various visual misleaders, we tested GPT-4V under four experimental conditions: naive zero-shot, naive few-shot, guided zero-shot, and guided few-shot. Our results demonstrate that GPT-4V can detect misleading visualizations with moderate accuracy without prior training (naive zero-shot) and that performance considerably improves by providing the model with the definitions of misleaders (guided zero-shot). However, combining definitions with examples of misleaders (guided few-shot) did not yield further improvements. This study underscores the feasibility of using large vision-language models such as GPT-4V to combat misinformation and emphasizes the importance of optimizing prompt engineering to enhance detection accuracy.","accessible_pdf":false,"authors":[{"affiliations":["University of Massachusetts Amherst, Amherst, United States"],"email":"jhalexander@umass.edu","is_corresponding":false,"name":"Jason Huang Alexander"},{"affiliations":["University of Massachusetts Amherst, Amherst, United States"],"email":"phnanda@umass.edu","is_corresponding":false,"name":"Priyal H Nanda"},{"affiliations":["Northeastern University, Boston, United States"],"email":"yangkc@iu.edu","is_corresponding":false,"name":"Kai-Cheng Yang"},{"affiliations":["University of Massachusetts Amherst, Amherst, United States"],"email":"asarv@cs.umass.edu","is_corresponding":true,"name":"Ali Sarvghad"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Ali Sarvghad"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1177","time_end":"","time_stamp":"","time_start":"","title":"Can GPT-4V Detect Misleading Visualizations?","uid":"v-short-1177","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1183":{"abstract":"An atmospheric front is an imaginary surface that separates two distinct air masses and is commonly defined as the warm-air side of a frontal zone with high gradients of atmospheric temperature and humidity. 
These fronts are a widely used conceptual model in meteorology and are often encountered in the literature as two-dimensional (2D) front lines on surface analysis charts. This paper presents a method for computing three-dimensional (3D) atmospheric fronts as surfaces that is capable of extracting continuous and well-confined features suitable for 3D visual analysis, spatio-temporal tracking, and statistical analyses. Recently developed contour-based methods for 3D front extraction rely on computing the third derivative of a moist potential temperature field. Additionally, they require the field to be smoothed to obtain continuous large-scale structures. This paper demonstrates the feasibility of an alternative method to front extraction using ridge surface computation. The proposed method requires only the second derivative of the input field and produces accurate structures even from unsmoothed data. An application of the ridge-based method to a data set corresponding to Cyclone Friederike demonstrates its benefits and utility towards visual analysis of the full 3D structure of fronts.","accessible_pdf":false,"authors":[{"affiliations":["Zuse Institute Berlin, Berlin, Germany"],"email":"anne.gossing@fu-berlin.de","is_corresponding":true,"name":"Anne Gossing"},{"affiliations":["Universit\u00e4t Hamburg, Hamburg, Germany"],"email":"andreas.beckert@uni-hamburg.de","is_corresponding":false,"name":"Andreas Beckert"},{"affiliations":["Universit\u00e4t Hamburg, Hamburg, Germany"],"email":"christoph.fischer-1@uni-hamburg.de","is_corresponding":false,"name":"Christoph Fischer"},{"affiliations":["Zuse Institute Berlin, Berlin, Germany"],"email":"klenert@zib.de","is_corresponding":false,"name":"Nicolas Klenert"},{"affiliations":["Indian Institute of Science, Bangalore, India"],"email":"vijayn@iisc.ac.in","is_corresponding":false,"name":"Vijay Natarajan"},{"affiliations":["Freie Universit\u00e4t Berlin, Berlin, Germany"],"email":"george.pacey@fu-berlin.de","is_corresponding":false,"name":"George Pacey"},{"affiliations":["Universit\u00e4t Hamburg, Hamburg, Germany"],"email":"thorwin.vogt@uni-hamburg.de","is_corresponding":false,"name":"Thorwin Vogt"},{"affiliations":["Universit\u00e4t Hamburg, Hamburg, Germany"],"email":"marc.rautenhaus@uni-hamburg.de","is_corresponding":false,"name":"Marc Rautenhaus"},{"affiliations":["Zuse Institute Berlin, Berlin, Germany"],"email":"baum@zib.de","is_corresponding":false,"name":"Daniel Baum"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Anne Gossing"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1183","time_end":"","time_stamp":"","time_start":"","title":"A Ridge-based Approach for Extraction and Visualization of 3D Atmospheric Fronts","uid":"v-short-1183","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1184":{"abstract":"To improve the perception of hierarchical structures in data sets, several color map generation algorithms have been proposed to take this structure into account. But the design of hierarchical color maps elicits different requirements to those of color maps for tabular data. Within this paper, we make an initial effort to put design rules from the color map literature into the context of hierarchical color maps. 
We investigate the impact of several design decisions and provide recommendations for various analysis scenarios. Thus, we lay the foundation for objective quality criteria to evaluate hierarchical color maps.","accessible_pdf":false,"authors":[{"affiliations":["Fraunhofer IGD, Darmstadt, Germany"],"email":"tobias.mertz@igd.fraunhofer.de","is_corresponding":true,"name":"Tobias Mertz"},{"affiliations":["Fraunhofer IGD, Darmstadt, Germany","TU Darmstadt, Darmstadt, Germany"],"email":"joern.kohlhammer@igd.fraunhofer.de","is_corresponding":false,"name":"J\u00f6rn Kohlhammer"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Tobias Mertz"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1184","time_end":"","time_stamp":"","time_start":"","title":"Towards a Quality Approach to Hierarchical Color Maps","uid":"v-short-1184","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1185":{"abstract":"The visualization and interactive exploration of geo-referenced networks pose challenges if the network's nodes are not evenly distributed. Our approach proposes new ways of realizing animated transitions for exploring such networks from an ego-perspective. We aim to reduce the required screen real estate while maintaining the viewers' mental map of distances and directions. A preliminary study provides first insights into the comprehensibility of animated geographic transitions regarding directional relationships between start and end point in different projections. Two use cases showcase how ego-perspective graph exploration can be supported using less screen space than previous approaches.","accessible_pdf":false,"authors":[{"affiliations":["University of Stuttgart, Stuttgart, Germany"],"email":"max@mumintroll.org","is_corresponding":true,"name":"Max Franke"},{"affiliations":["University of Stuttgart, Stuttgart, Germany"],"email":"samuel.beck@vis.uni-stuttgart.de","is_corresponding":false,"name":"Samuel Beck"},{"affiliations":["University of Stuttgart, Stuttgart, Germany"],"email":"steffen.koch@vis.uni-stuttgart.de","is_corresponding":false,"name":"Steffen Koch"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Max Franke"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1185","time_end":"","time_stamp":"","time_start":"","title":"Two-point Equidistant Projection and Degree-of-interest Filtering for Smooth Exploration of Geo-referenced Networks","uid":"v-short-1185","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1186":{"abstract":"Data visualizations help extract insights from datasets, but reaching these insights requires decomposing high-level goals into low-level analytic tasks that can be complex due to varying degrees of data literacy and visualization experience. Recent advancements in large language models (LLMs) have shown promise for lowering barriers for users to achieve tasks such as writing code and may likewise facilitate visualization insight. 
Scalable Vector Graphics (SVG), a text-based image format common in data visualizations, matches well with the text sequence processing of transformer-based LLMs. In this paper, we explore the capability of LLMs to perform 10 low-level visual analytic tasks defined by Amar, Eagan, and Stasko directly on SVG-based visualizations. Using zero-shot prompts, we instruct the models to provide responses or modify the SVG code based on given visualizations. Our findings demonstrate that LLMs can effectively modify existing SVG visualizations for some tasks like Cluster but perform poorly on tasks requiring mathematical operations like Compute Derived Value. We also discovered that LLM performance can vary based on factors such as the number of data points, the presence of value labels, and the chart type. Our findings contribute to gauging the general capabilities of LLMs and highlight the need for further exploration and development to fully harness their potential in supporting visual analytic tasks.","accessible_pdf":false,"authors":[{"affiliations":["Brown University, Providence, United States"],"email":"leooooxzz@gmail.com","is_corresponding":true,"name":"Zhongzheng Xu"},{"affiliations":["Emory University, Atlanta, United States"],"email":"emily.wall@emory.edu","is_corresponding":false,"name":"Emily Wall"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Zhongzheng Xu"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1186","time_end":"","time_stamp":"","time_start":"","title":"Exploring the Capability of LLMs in Performing Low-Level Visual Analytic Tasks on SVG Data Visualizations","uid":"v-short-1186","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1188":{"abstract":"Vortices and their analysis play a critical role in the understanding of complex phenomena in turbulent flow. Traditional vortex extraction methods, notably region-based techniques, often overlook the entanglement phenomenon, resulting in the inclusion of multiple vortices within a single extracted region. Their separation is necessary for quantifying different types of vortices and their statistics. In this study, we propose a novel vortex separation method that extends the conventional contour tree-based segmentation approach with an additional step termed \u201clayering\u201d. Upon extracting a vortical region using specified vortex criteria (e.g., \u03bb2), we initially establish topological segmentation based on the contour tree, followed by the layering process to allocate appropriate segmentation IDs to unsegmented cells, thus separating individual vortices within the region. However, these regions may still suffer from inaccurate splits, which we address statistically by leveraging the continuity of vorticity lines across the split boundaries. 
Our findings demonstrate a significant improvement in both the separation of vortices and the mitigation of inaccurate splits compared to prior methods.","accessible_pdf":false,"authors":[{"affiliations":["University of Houston, Houston, United States"],"email":"adeelz92@gmail.com","is_corresponding":true,"name":"Adeel Zafar"},{"affiliations":["University of Houston, Houston, United States"],"email":"zpoorsha@cougarnet.uh.edu","is_corresponding":false,"name":"Zahra Poorshayegh"},{"affiliations":["University of Houston, Houston, United States"],"email":"diyang@uh.edu","is_corresponding":false,"name":"Di Yang"},{"affiliations":["University of Houston, Houston, United States"],"email":"chengu@cs.uh.edu","is_corresponding":false,"name":"Guoning Chen"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Adeel Zafar"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1188","time_end":"","time_stamp":"","time_start":"","title":"Topological Separation of Vortices","uid":"v-short-1188","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1189":{"abstract":"The information visualization research community commonly produces supporting software to demonstrate technical contributions to the field. However, developing this software tends to be an overwhelming task, and the final product is often a research prototype built without much thought for modularization and re-usability, which makes it harder to replicate and adopt. This paper presents a design pattern for facilitating the creation, dissemination, and re-utilization of visualization techniques using reactive widgets. The design pattern features basic concepts that leverage modern front-end development best practices and standards, which ease development and replication. The paper presents several usage examples of the pattern, templates for implementation, and even a wrapper for facilitating the conversion of any Vega specification into a reactive widget.","accessible_pdf":false,"authors":[{"affiliations":["Northeastern University, San Francisco, United States"],"email":"john.guerra@gmail.com","is_corresponding":true,"name":"John Alexis Guerra-Gomez"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["John Alexis Guerra-Gomez"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1189","time_end":"","time_stamp":"","time_start":"","title":"Towards Reusable and Reactive Widgets for Information Visualization Research and Dissemination","uid":"v-short-1189","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1191":{"abstract":"To enable data-driven decision-making across organizations, data professionals need to share insights with their colleagues in context-appropriate communication channels. 
Many of their colleagues rely on data but are not themselves analysts; furthermore, their colleagues are reluctant or unable to use dedicated analytical applications or dashboards, and they expect communication to take place within threaded collaboration platforms such as Slack or Microsoft Teams. In this paper, we introduce a set of six strategies for adapting content from business intelligence (BI) dashboards into appropriate formats for sharing on collaboration platforms, formats that we refer to as dashboard snapshots. Informed by prior studies of enterprise communication around data, these strategies go beyond redesigning or restyling by considering varying levels of data literacy across an organization, introducing affordances for self-service question-answering, and anticipating the post-sharing lifecycle of data artifacts. These strategies involve the use of templates that are matched to common communicative intents, serving to reduce the workload of data professionals. We contribute a formal representation of these strategies and demonstrate their applicability in a comprehensive enterprise communication scenario featuring multiple stakeholders that unfolds over the span of months.","accessible_pdf":false,"authors":[{"affiliations":["Northwestern University, Evanston, United States"],"email":"hyeokkim2024@u.northwestern.edu","is_corresponding":true,"name":"Hyeok Kim"},{"affiliations":["Tableau Research, Seattle, United States"],"email":"arjun.srinivasan.10@gmail.com","is_corresponding":false,"name":"Arjun Srinivasan"},{"affiliations":["Tableau Research, Seattle, United States"],"email":"mbrehmer@uwaterloo.ca","is_corresponding":false,"name":"Matthew Brehmer"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Hyeok Kim"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1191","time_end":"","time_stamp":"","time_start":"","title":"Bringing Data into the Conversation: Adapting Content from Business Intelligence Dashboards for Threaded Collaboration Platforms","uid":"v-short-1191","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1192":{"abstract":"Narrative visualization has become a crucial tool in data presentation, merging storytelling with data visualization to convey complex information in an engaging and accessible manner. In this study, we review the design space for narrative visualizations, focusing on animation style, through a comprehensive analysis of 71 papers from key visualization venues. We categorize these papers into six broad themes: Animation Style, Interactivity, Technology Usage, Methodology Development, Evaluation Type, and Application Domain. Our findings reveal a significant evolution in the field, marked by a growing preference for animated and non-interactive techniques. This trend reflects a shift towards minimizing user interaction while enhancing the clarity and impact of data presentation. We also identified key trends and technologies that have shaped the field, highlighting the role of technologies, such as machine learning in driving these changes. 
We offer insights into the dynamic interrelations within the narrative visualization domain, suggesting a future research trajectory that balances interactivity with automated tools to foster increased engagement. Our work lays the groundwork for future approaches for effective and innovative narrative visualization in diverse applications.","accessible_pdf":false,"authors":[{"affiliations":["Louisiana State University, Baton Rouge, United States"],"email":"jyang44@lsu.edu","is_corresponding":true,"name":"Vyri Junhan Yang"},{"affiliations":["Louisiana State University, Baton Rouge, United States"],"email":"mjasim@lsu.edu","is_corresponding":false,"name":"Mahmood Jasim"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Vyri Junhan Yang"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1192","time_end":"","time_stamp":"","time_start":"","title":"Animating the Narrative: A Review of Animation Styles in Narrative Visualization","uid":"v-short-1192","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1193":{"abstract":"We present LinkQ, a system that leverages a large language model (LLM) to facilitate knowledge graph (KG) query construction through natural language question-answering. Traditional approaches often require detailed knowledge of complex graph querying languages, limiting the ability for users -- even experts -- to acquire valuable insights from KG data. LinkQ simplifies this process by first interpreting a user's question, then converting it into a well-formed KG query. By using the LLM to construct a query instead of directly answering the user's question, LinkQ guards against the LLM hallucinating or generating false, erroneous information. By integrating an LLM into LinkQ, users are able to conduct both exploratory and confirmatory data analysis, with the LLM helping to iteratively refine open-ended questions into precise ones. To demonstrate the efficacy of LinkQ, we conducted a qualitative study with five KG practitioners and distill their feedback. 
Our results indicate that practitioners find LinkQ effective for KG question-answering, and desire future LLM-assisted systems for the exploratory analysis of graph databases.","accessible_pdf":false,"authors":[{"affiliations":["MIT Lincoln Laboratory, Lexington, United States"],"email":"harry.li@ll.mit.edu","is_corresponding":true,"name":"Harry Li"},{"affiliations":["Tufts University, Medford, United States"],"email":"gabriel.appleby@tufts.edu","is_corresponding":false,"name":"Gabriel Appleby"},{"affiliations":["MIT Lincoln Laboratory, Lexington, United States"],"email":"ashley.suh@ll.mit.edu","is_corresponding":false,"name":"Ashley Suh"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Harry Li"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1193","time_end":"","time_stamp":"","time_start":"","title":"LinkQ: An LLM-Assisted Visual Interface for Knowledge Graph Question-Answering","uid":"v-short-1193","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1199":{"abstract":"In the digital landscape, the ubiquity of data visualizations in media underscores the necessity for accessibility to ensure inclusivity for all users, including those with visual impairments. Current visual content often fails to cater to the needs of screen reader users due to the absence of comprehensive textual descriptions. To address this gap, we propose in this paper a framework designed to empower media content creators to transform charts into descriptive narratives. This tool not only facilitates the understanding of complex visual data through text but also fosters a broader awareness of accessibility in digital content creation. Through the application of this framework, users can interpret and convey the insights of data visualizations more effectively, accommodating a diverse audience. Our evaluations reveal that this tool not only enhances the comprehension of data visualizations but also promotes new perspectives on the represented data, thereby broadening the interpretative possibilities for all users.","accessible_pdf":false,"authors":[{"affiliations":["Polytechnique Montr\u00e9al, Montr\u00e9al, Canada"],"email":"qiangxu1204@gmail.com","is_corresponding":true,"name":"Qiang Xu"},{"affiliations":["Polytechnique Montreal, Montreal, Canada"],"email":"thomas.hurtut@polymtl.ca","is_corresponding":false,"name":"Thomas Hurtut"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Qiang Xu"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1199","time_end":"","time_stamp":"","time_start":"","title":"From Graphs to Words: A Computer-Assisted Framework for the Production of Accessible Text Descriptions","uid":"v-short-1199","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1207":{"abstract":"An essential task of an air traffic controller is to manage the traffic flow by predicting future trajectories. 
Complex traffic patterns are difficult to predict and manage and impose cognitive load on the air traffic controllers. In this work, we present an interactive visual analytics interface that facilitates the detection and resolution of complex traffic patterns for air traffic controllers (ATCOs). The interface supports the users in detecting complex clusters of aircraft and uses visual representations to communicate these clusters to the controllers and to propose re-routing options. The interface further enables the ATCOs to visualize and simultaneously compare how different re-routing strategies for each individual aircraft reduce complexity in the entire sector for the next hour. The development of the concepts was supported by the domain-specific feedback we received from six fully licensed and operational air traffic controllers in an iterative design process over a period of 14 months.","accessible_pdf":false,"authors":[{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden","Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"elmira.zohrevandi@liu.se","is_corresponding":true,"name":"Elmira Zohrevandi"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden","Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"katerina.vrotsou@liu.se","is_corresponding":false,"name":"Katerina Vrotsou"},{"affiliations":["Institute of Science and Technology, Norrk\u00f6ping, Sweden","Institute of Science and Technology, Norrk\u00f6ping, Sweden"],"email":"carl.westin@liu.se","is_corresponding":false,"name":"Carl A. L. Westin"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden","Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"jonas.lundberg@liu.se","is_corresponding":false,"name":"Jonas Lundberg"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden","Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"anders.ynnerman@liu.se","is_corresponding":false,"name":"Anders Ynnerman"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Elmira Zohrevandi"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1207","time_end":"","time_stamp":"","time_start":"","title":"Design of a Real-Time Visual Analytics Decision Support Interface to Manage Air Traffic Complexity","uid":"v-short-1207","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1211":{"abstract":"Transfer function design is crucial in volume rendering, as it directly influences the visual representation and interpretation of volumetric data. However, creating effective transfer functions that align with users\u2019 visual objectives is often challenging due to the complex parameter space and the semantic gap between transfer function values and features of interest within the volume. In this work, we propose a novel approach that leverages recent advancements in language-vision models to bridge this semantic gap. By employing a fully differentiable rendering pipeline and an image-based loss function guided by language descriptions, our method generates transfer functions that yield volume-rendered images closely matching the user\u2019s intent. 
We demonstrate the effectiveness of our approach in creating meaningful transfer functions from simple descriptions, empowering users to intuitively express their desired visual outcomes with minimal effort. This advancement streamlines the transfer function design process and makes volume rendering more accessible to a broader range of users.","accessible_pdf":false,"authors":[{"affiliations":["Vanderbilt University, Nashville, United States"],"email":"sangwon.jeong@vanderbilt.edu","is_corresponding":true,"name":"Sangwon Jeong"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"jixianli@sci.utah.edu","is_corresponding":false,"name":"Jixian Li"},{"affiliations":["Lawrence Livermore National Laboratory , Livermore, United States"],"email":"shusenl@sci.utah.edu","is_corresponding":false,"name":"Shusen Liu"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"crj@sci.utah.edu","is_corresponding":false,"name":"Chris R. Johnson"},{"affiliations":["Vanderbilt University, Nashville, United States"],"email":"matthew.berger@vanderbilt.edu","is_corresponding":false,"name":"Matthew Berger"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Sangwon Jeong"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1211","time_end":"","time_stamp":"","time_start":"","title":"Text-based transfer function design for semantic volume rendering","uid":"v-short-1211","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1224":{"abstract":"Diffusion-based generative models\u2019 impressive ability to create convincing images has garnered global attention. However, their complex structures and operations often pose challenges for non-experts to grasp. We present Diffusion Explainer, the first interactive visualization tool that explains how Stable Diffusion transforms text prompts into images. Diffusion Explainer tightly integrates a visual overview of Stable Diffusion\u2019s complex structure with explanations of the underlying operations. By comparing image generation of prompt variants, users can discover the impact of keyword changes on image generation. A 56-participant user study demonstrates that Diffusion Explainer offers substantial learning benefits to non-experts. Our tool has been used by over 10,300 users from 124 countries at https://poloclub.github.io/diffusion-explainer/.","accessible_pdf":false,"authors":[{"affiliations":["Georgia Tech, Atlanta, United States"],"email":"seongmin@gatech.edu","is_corresponding":true,"name":"Seongmin Lee"},{"affiliations":["GA Tech, Atlanta, United States","IBM Research AI, Cambridge, United States"],"email":"benjamin.hoover@ibm.com","is_corresponding":false,"name":"Benjamin Hoover"},{"affiliations":["IBM Research AI, Cambridge, United States"],"email":"hendrik@strobelt.com","is_corresponding":false,"name":"Hendrik Strobelt"},{"affiliations":["Georgia Tech, Atlanta, United States"],"email":"jayw@gatech.edu","is_corresponding":false,"name":"Zijie J. 
Wang"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"speng65@gatech.edu","is_corresponding":false,"name":"ShengYun Peng"},{"affiliations":["Georgia Institute of Technology , Atlanta , United States"],"email":"apwright@gatech.edu","is_corresponding":false,"name":"Austin P Wright"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"kevin.li@gatech.edu","is_corresponding":false,"name":"Kevin Li"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"haekyu@gatech.edu","is_corresponding":false,"name":"Haekyu Park"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"alexanderyang@gatech.edu","is_corresponding":false,"name":"Haoyang Yang"},{"affiliations":["Georgia Tech, Atlanta, United States"],"email":"polo@gatech.edu","is_corresponding":false,"name":"Duen Horng (Polo) Chau"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Seongmin Lee"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1224","time_end":"","time_stamp":"","time_start":"","title":"Diffusion Explainer: Visual Explanation for Text-to-image Stable Diffusion","uid":"v-short-1224","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1235":{"abstract":"A high number of samples often leads to occlusion in scatterplots, which hinders data perception and analysis. De-cluttering approaches based on spatial transformation reduce visual clutter by remapping samples using the entire available scatterplot domain. Such regularized scatterplots may still be used for data analysis tasks, if the spatial transformation is smooth and preserves the original neighborhood relations of samples. Recently, Rave et al. proposed an efficient regularization method based on integral images. We propose a generalization of their regularization scheme using sector-based transformations with the aim of increasing sample uniformity of the resulting scatterplot. 
We document the improvement of our approach using various uniformity measures.","accessible_pdf":false,"authors":[{"affiliations":["University of M\u00fcnster, M\u00fcnster, Germany"],"email":"hennes.rave@uni-muenster.de","is_corresponding":true,"name":"Hennes Rave"},{"affiliations":["University of M\u00fcnster, M\u00fcnster, Germany"],"email":"molchano@uni-muenster.de","is_corresponding":false,"name":"Vladimir Molchanov"},{"affiliations":["University of M\u00fcnster, M\u00fcnster, Germany"],"email":"linsen@uni-muenster.de","is_corresponding":false,"name":"Lars Linsen"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Hennes Rave"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1235","time_end":"","time_stamp":"","time_start":"","title":"Uniform Sample Distribution in Scatterplots via Sector-based Transformation","uid":"v-short-1235","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1236":{"abstract":"Automatically generating data visualizations in response to human utterances on datasets necessitates a deep semantic understanding of the data utterance, including implicit and explicit references to data attributes, visualization tasks, and necessary data preparation steps. Natural Language Interfaces (NLIs) for data visualization have explored ways to infer such information, yet challenges persist due to inherent uncertainty in human speech. Recent advances in Large Language Models (LLMs) provide an avenue to address these challenges, but their ability to extract the relevant semantic information remains unexplored. In this study, we evaluate four publicly available LLMs (GPT-4, Gemini-Pro, Llama3, and Mixtral), investigating their ability to comprehend utterances even in the presence of uncertainty and identify the relevant data context and visual tasks. Our findings reveal that LLMs are sensitive to uncertainties in utterance. Despite this sensitivity, they are able to extract the relevant data context. However, LLMs struggle with inferring visualization tasks. Based on these results, we highlight future research directions on using LLMs for visualization generation. Our supplementary materials have been shared on OSF: https://osf.io/j342a/wiki/home/?view_only=b4051ffc6253496d9bce818e4a89b9f9","accessible_pdf":false,"authors":[{"affiliations":["University of Maryland, College Park, United States"],"email":"hbako@umd.edu","is_corresponding":true,"name":"Hannah K. Bako"},{"affiliations":["University of Maryland, College Park, United States"],"email":"arshnoorbhutani8@gmail.com","is_corresponding":false,"name":"Arshnoor Bhutani"},{"affiliations":["The University of Texas at Austin, Austin, United States"],"email":"xinyi.liu@utexas.edu","is_corresponding":false,"name":"Xinyi Liu"},{"affiliations":["University of Maryland, College Park, United States"],"email":"kcobbina@cs.umd.edu","is_corresponding":false,"name":"Kwesi Adu Cobbina"},{"affiliations":["University of Maryland, College Park, United States"],"email":"leozcliu@umd.edu","is_corresponding":false,"name":"Zhicheng Liu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Hannah K. 
Bako"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1236","time_end":"","time_stamp":"","time_start":"","title":"Evaluating the Semantic Profiling Abilities of LLMs for Natural Language Utterances in Data Visualization","uid":"v-short-1236","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1248":{"abstract":"Statistical practices such as building regression models or running hypothesis tests rely on following rigorous procedures of steps and verifying assumptions on data to produce valid results. However, common statistical tools do not verify users\u2019 decision choices and provide low-level statistical functions without instructions on the whole analysis practice. Users can easily misuse analysis methods, potentially decreasing the validity of results. To address this problem, we introduce GuidedStats, an interactive interface within computational notebooks that encapsulates guidance, models, visualization, and exportable results into interactive workflows. It breaks down typical analysis processes, such as linear regression and two-sample T-tests, into interactive steps supplemented with automatic visualizations and explanations for step-wise evaluation. Users can iterate on input choices to refine their models, while recommended actions and exports allow the user to continue their analysis in code. Case studies show how GuidedStats offers valuable instructions for conducting fluid statistical analyses while finding possible assumption violations in the underlying data, supporting flexible and accurate statistical analyses.","accessible_pdf":false,"authors":[{"affiliations":["New York University, New York, United States"],"email":"yz9381@nyu.edu","is_corresponding":true,"name":"Yuqi Zhang"},{"affiliations":["Carnegie Mellon University, Pittsburgh, United States"],"email":"adamperer@cmu.edu","is_corresponding":false,"name":"Adam Perer"},{"affiliations":["Carnegie Mellon University, Pittsburgh, United States"],"email":"willepp@cmu.edu","is_corresponding":false,"name":"Will Epperson"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yuqi Zhang"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1248","time_end":"","time_stamp":"","time_start":"","title":"Guided Statistical Workflows with Interactive Explanations and Assumption Checking","uid":"v-short-1248","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1264":{"abstract":"The Local Moran's I statistic is a valuable tool for identifying localized patterns of spatial autocorrelation. Understanding these patterns is crucial in spatial analysis, but interpreting the statistic can be difficult. To simplify this process, we introduce three novel visualizations that enhance the interpretation of Local Moran's I results. These visualizations can be interactively linked to one another, and to established visualizations, to offer a more holistic exploration of the results. 
We provide a JavaScript library with implementations of these new visual elements, along with a web dashboard that demonstrates their integrated use.","accessible_pdf":false,"authors":[{"affiliations":["NIH, Rockville, United States","Queen's University, Belfast, United Kingdom"],"email":"masonlk@nih.gov","is_corresponding":true,"name":"Lee Mason"},{"affiliations":["Queen's University Belfast , Belfast , United Kingdom"],"email":"b.hicks@qub.ac.uk","is_corresponding":false,"name":"Bl\u00e1naid Hicks"},{"affiliations":["National Institutes of Health, Rockville, United States"],"email":"jonas.dealmeida@nih.gov","is_corresponding":false,"name":"Jonas S Almeida"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Lee Mason"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1264","time_end":"","time_stamp":"","time_start":"","title":"Demystifying Spatial Dependence: Interactive Visualizations for Interpreting Local Spatial Autocorrelation","uid":"v-short-1264","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1274":{"abstract":"This study examines the impact of positive and negative contrast polarities (i.e., light and dark modes) on the performance of younger adults and people in their late adulthood (PLA). In a crowdsourced study with 134 participants (69 below age 60, 66 aged 60 and above), we assessed their accuracy and time performing analysis tasks across three common visualization types (Bar, Line, Scatterplot) and two contrast polarities (positive and negative). We observed that, across both age groups, the polarity that led to better performance and the resulting amount of improvement varied on an individual basis, with each polarity benefiting comparable proportions of participants. Additionally, we observed that the choice of contrast polarity can have an impact on time similar to that of the choice of visualization type, resulting in an average percent difference of around 36%. These findings indicate that, overall, the effects of contrast polarity on visual analysis performance do not noticeably change with age. Furthermore, they underscore the importance of making visualizations available in both contrast polarities to better-support a broad audience with differing needs.","accessible_pdf":false,"authors":[{"affiliations":["University of Massachusetts Amherst, Amherst, United States"],"email":"zwhile@cs.umass.edu","is_corresponding":true,"name":"Zack While"},{"affiliations":["University of Massachusetts Amherst, Amherst, United States"],"email":"asarv@cs.umass.edu","is_corresponding":false,"name":"Ali Sarvghad"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Zack While"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1274","time_end":"","time_stamp":"","time_start":"","title":"Dark Mode or Light Mode? 
Exploring the Impact of Contrast Polarity on Visualization Performance Between Age Groups","uid":"v-short-1274","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1276":{"abstract":"Machine Learning models for chart-grounded Q&A (CQA) often treat charts as images, but performing CQA on pixel values has proven challenging. We thus investigate a resource overlooked by current ML-based approaches: the declarative documents describing how charts should visually encode data (i.e., chart specifications). In this work, we use chart specifications to enhance language models (LMs) for chart-reading tasks, such that the resulting system can robustly understand language for CQA. Through a case study with 359 bar charts, we test novel fine tuning schemes on both GPT-3 and T5 using a new dataset curated for two CQA tasks: question-answering and visual explanation generation. Our text-only approaches strongly outperform vision-based GPT-4 on explanation generation (99% vs. 63% accuracy), and show promising results for question-answering (57-67% accuracy). Through in-depth experiments, we also show that our text-only approaches are mostly robust to natural language variation.","accessible_pdf":false,"authors":[{"affiliations":["Adobe Research, San Jose, United States"],"email":"victorbursztyn2022@u.northwestern.edu","is_corresponding":true,"name":"Victor S. Bursztyn"},{"affiliations":["Adobe Research, Seattle, United States"],"email":"jhoffs@adobe.com","is_corresponding":false,"name":"Jane Hoffswell"},{"affiliations":["Adobe Research, San Jose, United States"],"email":"sguo@adobe.com","is_corresponding":false,"name":"Shunan Guo"},{"affiliations":["Adobe Research, San Jose, United States"],"email":"eunyee@adobe.com","is_corresponding":false,"name":"Eunyee Koh"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Victor S. Bursztyn"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1276","time_end":"","time_stamp":"","time_start":"","title":"Representing Charts as Text for Language Models: An In-Depth Study of Question Answering for Bar Charts","uid":"v-short-1276","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1277":{"abstract":"Trust is a subjective yet fundamental component of human-computer interaction, and is a determining factor in shaping the efficacy of data visualizations. Prior research has identified five dimensions of trust assessment in visualizations (credibility, clarity, reliability, familiarity, and confidence), and observed that these dimensions tend to vary predictably along with certain features of the visualization being evaluated. This raises a further question: how do the design features driving viewers' trust assessment vary with the characteristics of the viewers themselves? By reanalyzing data from these studies through the lens of individual differences, we build a more detailed map of the relationships between design features, individual characteristics, and trust behaviors. In particular, we model the distinct contributions of endogenous design features (such as visualization type, or the use of color) and exogenous user characteristics (such as visualization literacy), as well as the interactions between them. 
We then use these findings to make recommendations for individualized and adaptive visualization design.","accessible_pdf":false,"authors":[{"affiliations":["Smith College, Northampton, United States"],"email":"jcrouser@smith.edu","is_corresponding":true,"name":"R. Jordan Crouser"},{"affiliations":["Smith College, Northampton, United States"],"email":"cmatoussi@smith.edu","is_corresponding":false,"name":"Syrine Matoussi"},{"affiliations":["Smith College, Northampton, United States"],"email":"ekung@smith.edu","is_corresponding":false,"name":"Lan Kung"},{"affiliations":["Washington University in St. Louis, St. Louis, United States"],"email":"p.saugat@wustl.edu","is_corresponding":false,"name":"Saugat Pandey"},{"affiliations":["Washington University in St. Louis, St. Louis, United States"],"email":"m.oen@wustl.edu","is_corresponding":false,"name":"Oen G McKinley"},{"affiliations":["Washington University in St. Louis, St. Louis, United States"],"email":"alvitta@wustl.edu","is_corresponding":false,"name":"Alvitta Ottley"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["R. Jordan Crouser"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1277","time_end":"","time_stamp":"","time_start":"","title":"Building and Eroding: Exogenous and Endogenous Factors that Influence Subjective Trust in Visualization","uid":"v-short-1277","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1285":{"abstract":"This study examines the impact of social-comparison risk visualizations on public health communication, comparing the effects of traditional bar charts against alternative jitter plots emphasizing geographic variability (geo jitter). The research highlights that whereas both visualization types increased perceived vulnerability, behavioral intent, and policy support, the geo jitter plots were significantly more effective in reducing unjustified personal attributions. Importantly, the findings also underscore the emotional challenges faced by visualization viewers from marginalized communities, indicating a need for designs that are sensitive to the potential for reinforcing stereotypes or eliciting negative emotions. This work suggests a strategic reevaluation of visual communication tools in public health to enhance understanding and engagement without contributing to negative attributions or emotional distress.","accessible_pdf":false,"authors":[{"affiliations":["3iap, Raleigh, United States"],"email":"eli@3iap.com","is_corresponding":false,"name":"Eli Holder"},{"affiliations":["Northeastern University, Boston, United States","University of California Merced, Merced, United States"],"email":"l.padilla@northeastern.edu","is_corresponding":true,"name":"Lace M. Padilla"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Lace M. 
Padilla"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1285","time_end":"","time_stamp":"","time_start":"","title":"\"Must Be a Tuesday\": Affect, Attribution, and Geographic Variability in Equity-Oriented Visualizations of Population Health Disparities","uid":"v-short-1285","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1292":{"abstract":"Collaborative planning for congenital heart diseases typically involves creating physical heart models through 3D printing, which are then examined by both surgeons and cardiologists. Recent developments in mobile augmented reality (AR) technologies have presented a viable alternative, known for their ease of use and portability. However, there is still a lack of research examining the utilization of multi-user mobile AR environments to support collaborative planning for cardiovascular surgeries. We created ARCollab, an iOS AR app designed for enabling multiple surgeons and cardiologists to interact with a patient's 3D heart model in a shared environment. ARCollab enables surgeons and cardiologists to import heart models, manipulate them through gestures and collaborate with other users, eliminating the need for fabricating physical heart models. Our evaluation of ARCollab's usability and usefulness in enhancing collaboration, conducted with three cardiothoracic surgeons and two cardiologists, marks the first human evaluation of a multi-user mobile AR tool for surgical planning. ARCollab is open-source, available at https://github.com/poloclub/arcollab.","accessible_pdf":false,"authors":[{"affiliations":["Georgia Tech, Atlanta, United States"],"email":"pratham.mehta001@gmail.com","is_corresponding":true,"name":"Pratham Darrpan Mehta"},{"affiliations":["Georgia Tech, Atlanta, United States"],"email":"rnarayanan39@gatech.edu","is_corresponding":false,"name":"Rahul Ozhur Narayanan"},{"affiliations":["Georgia Tech, Atlanta, United States"],"email":"harsha5431@gmail.com","is_corresponding":false,"name":"Harsha Karanth"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"alexanderyang@gatech.edu","is_corresponding":false,"name":"Haoyang Yang"},{"affiliations":["Emory University, Atlanta, United States"],"email":"slesnickt@kidsheart.com","is_corresponding":false,"name":"Timothy C Slesnick"},{"affiliations":["Emory University/Children's Healthcare of Atlanta, Atlanta, United States"],"email":"fawwaz.shaw@choa.org","is_corresponding":false,"name":"Fawwaz Shaw"},{"affiliations":["Georgia Tech, Atlanta, United States"],"email":"polo@gatech.edu","is_corresponding":false,"name":"Duen Horng (Polo) Chau"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Pratham Darrpan Mehta"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1292","time_end":"","time_stamp":"","time_start":"","title":"Multi-User Mobile Augmented Reality for Cardiovascular Surgical Planning","uid":"v-short-1292","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1301":{"abstract":"Reactionary delay'' is a result 
of the accumulated cascading effects of knock-on train delays. It is a growing problem as shared railway infrastructure becomes more crowded. The chaotic nature of its effects is notoriously hard to predict. We use a stochastic Monte-Carlo-style simulation of reactionary delay that produces whole distributions of likely reactionary delay. Our contribution is demonstrating how Zoomable GlyphTables -- case-by-variable tables in which cases are rows and variables are columns; the variables are complex composite metrics that incorporate distributions, and the cells contain mini-charts that depict these at different levels of detail through zoom interaction -- help interpret these results, supporting understanding of the causes and effects of reactionary delay and informing timetable robustness testing and tweaking. We describe our design principles, demonstrate how they supported our analytical tasks, and reflect on the wider potential of Zoomable GlyphTables.","accessible_pdf":false,"authors":[{"affiliations":["City, University of London, London, United Kingdom"],"email":"a.slingsby@city.ac.uk","is_corresponding":true,"name":"Aidan Slingsby"},{"affiliations":["Risk Solutions, Warrington, United Kingdom"],"email":"jonathan.hyde@risksol.co.uk","is_corresponding":false,"name":"Jonathan Hyde"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Aidan Slingsby"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1301","time_end":"","time_stamp":"","time_start":"","title":"Zoomable Glyph Tables for Interpreting Probabilistic Model Outputs for Reactionary Train Delays","uid":"v-short-1301","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20223193756":{"abstract":"Information visualization uses various types of representations to encode data into graphical formats. Prior work on visualization techniques has evaluated the accuracy of perceived numerical data values from visual data encodings such as graphical position, length, orientation, size, and color. Our work aims to extend the research of graphical perception to the use of motion as a data encoding for quantitative values. We present two experiments implementing multiple fundamental aspects of motion, such as type, speed, and synchronicity, that can be used for numerical value encoding, as well as comparing motion to static visual encodings in terms of user perception and accuracy. We studied how well users can assess the differences between several types of motion and static visual encodings and present an updated ranking of accuracy for quantitative judgments. Our results indicate that non-synchronized motion can be interpreted more quickly and more accurately than synchronized motion. Moreover, our ranking of static and motion visual representations shows that motion, especially expansion and translational types, has great potential as a data encoding technique for quantitative value. 
Finally, we discuss the implications for the use of animation and motion for numerical representations in data visualization.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Shaghayegh Esmaeili"],"doi":"10.1109/TVCG.2022.3193756","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Information visualization, animation and motion-related techniques, empirical study, graphical perception, evaluation."],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20223193756","time_end":"","time_stamp":"","time_start":"","title":"Evaluating Graphical Perception of Visual Motion for Quantitative Data Encoding","uid":"v-tvcg-20223193756","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20223229017":{"abstract":"We present V-Mail, a framework of cross-platform applications, interactive techniques, and communication protocols for improved multi-person correspondence about spatial 3D datasets. Inspired by the daily use of e-mail, V-Mail seeks to enable a similar style of rapid, multi-person communication accessible on any device; however, it aims to do this in the new context of spatial 3D communication, where limited access to 3D graphics hardware typically prevents such communication. The approach integrates visual data storytelling with data exploration, spatial annotations, and animated transitions. V-Mail ``data stories'' are exported in a standard video file format to establish a common baseline level of access on (almost) any device. The V-Mail framework also includes a series of complementary client applications and plugins that enable different degrees of story co-authoring and data exploration, adjusted automatically to match the capabilities of various devices. A lightweight, phone-based V-Mail app makes it possible to annotate data by adding captions to the video. These spatial annotations are then immediately accessible to team members running high-end 3D graphics visualization systems that also include a V-Mail client, implemented as a plugin. Results and evaluation from applying V-Mail to assist communication within an interdisciplinary science team studying Antarctic ice sheets confirm the utility of the asynchronous, cross-platform collaborative framework while also highlighting some current limitations and opportunities for future work.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Jung Who Nam"],"doi":"10.1109/TVCG.2022.3229017","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Human-computer interaction, visualization of scientific 3D data, communication, storytelling, immersive analytics"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20223229017","time_end":"","time_stamp":"","time_start":"","title":"V-Mail: 3D-Enabled Correspondence about Spatial Data on (Almost) All Your Devices","uid":"v-tvcg-20223229017","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20233261320":{"abstract":"In recent years, narrative visualization has gained much attention. 
Researchers have proposed different design spaces for various narrative visualization genres and scenarios to facilitate the creation process. As users' needs grow and automation technologies advance, increasingly more tools have been designed and developed. In this study, we summarized six genres of narrative visualization (annotated charts, infographics, timelines & storylines, data comics, scrollytelling & slideshow, and data videos) based on previous research and four types of tools (design spaces, authoring tools, ML/AI-supported tools and ML/AI-generator tools) based on the intelligence and automation level of the tools. We surveyed 105 papers and tools to study how automation can progressively engage in visualization design and narrative processes to help users easily create narrative visualizations. This research aims to provide an overview of current research and development in the automation involvement of narrative visualization tools. We discuss key research problems in each category and suggest new opportunities to encourage further research in the related domain.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Qing Chen"],"doi":"10.1109/TVCG.2023.3261320","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Data Visualization, Automatic Visualization, Narrative Visualization, Design Space, Authoring Tools, Survey"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233261320","time_end":"","time_stamp":"","time_start":"","title":"How Does Automation Shape the Process of Narrative Visualization: A Survey of Tools","uid":"v-tvcg-20233261320","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20233275925":{"abstract":"A contiguous area cartogram is a geographic map in which the area of each region is proportional to numerical data (e.g., population size) while keeping neighboring regions connected. In this study, we investigated whether value-to-area legends (square symbols next to the values represented by the squares' areas) and grid lines aid map readers in making better area judgments. We conducted an experiment to determine the accuracy, speed, and confidence with which readers infer numerical data values for the mapped regions. We found that, when only informed about the total numerical value represented by the whole cartogram without any legend, the distribution of estimates for individual regions was centered near the true value with substantial spread. Legends with grid lines significantly reduced the spread but led to a tendency to underestimate the values. Comparing differences between regions or between cartograms revealed that legends and grid lines slowed the estimation without improving accuracy. However, participants were more likely to complete the tasks when legends and grid lines were present, particularly when the area units represented by these features could be interactively selected. 
We recommend considering the cartogram's use case and purpose before deciding whether to include grid lines or an interactive legend.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Michael Gastner"],"doi":"10.1109/TVCG.2023.3275925","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Task Analysis, Symbols, Data Visualization, Sociology, Visualization, Switches, Mice, Cartogram, Geovisualization, Interactive Data Exploration, Quantitative Evaluation"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233275925","time_end":"","time_stamp":"","time_start":"","title":"Effectiveness of Area-to-Value Legends and Grid Lines in Contiguous Area Cartograms","uid":"v-tvcg-20233275925","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20233287585":{"abstract":"Data visualization and journalism are deeply connected. From early infographics to recent data-driven storytelling, visualization has become an integrated part of contemporary journalism, primarily as a communication artifact to inform the general public. Data journalism, harnessing the power of data visualization, has emerged as a bridge between the growing volume of data and our society. Visualization research that centers around data storytelling has sought to understand and facilitate such journalistic endeavors. However, a recent metamorphosis in journalism has brought broader challenges and opportunities that extend beyond mere communication of data. We present this article to enhance our understanding of such transformations and thus broaden visualization research's scope and practical contribution to this evolving field. We first survey recent significant shifts, emerging challenges, and computational practices in journalism. We then summarize six roles of computing in journalism and their implications. Based on these implications, we provide propositions for visualization research concerning each role. Ultimately, by mapping the roles and propositions onto a proposed ecological model and contextualizing existing visualization research, we surface seven general topics and a series of research agendas that can guide future visualization research at this intersection.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yu Fu"],"doi":"10.1109/TVCG.2023.3287585","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Computational journalism, data visualization, data-driven storytelling, journalism"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233287585","time_end":"","time_stamp":"","time_start":"","title":"More Than Data Stories: Broadening the Role of Visualization in Contemporary Journalism","uid":"v-tvcg-20233287585","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20233289292":{"abstract":"Reading a visualization is like reading a paragraph. Each sentence is a comparison: the mean of these is higher than those; this difference is smaller than that. What determines which comparisons are made first? 
The viewer's goals and expertise matter, but the way that values are visually grouped together within the chart also impacts those comparisons. Research from psychology suggests that comparisons involve multiple steps. First, the viewer divides the visualization into a set of units. This might include a single bar or a grouped set of bars. Then the viewer selects and compares two of these units, perhaps noting that one pair of bars is longer than another. Viewers might take an additional third step and perform a second-order comparison, perhaps determining that the difference between one pair of bars is greater than the difference between another pair. We create a visual comparison taxonomy that allows us to develop and test a sequence of hypotheses about which comparisons people are more likely to make when reading a visualization. We find that people tend to compare two groups before comparing two individual bars and that second-order comparisons are rare. Visual cues like spatial proximity and color can influence which elements are grouped together and selected for comparison, with spatial proximity being a stronger grouping cue. Interestingly, once the viewer grouped together and compared a set of bars, regardless of whether the group is formed by spatial proximity or color similarity, they no longer consider other possible groupings in their comparisons.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Cindy Xiong Bearfield"],"doi":"10.1109/TVCG.2023.3289292","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["comparison, perception, visual grouping, bar charts, verbal conclusions."],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233289292","time_end":"","time_stamp":"","time_start":"","title":"What Does the Chart Say? Grouping Cues Guide Viewer Comparisons and Conclusions in Bar Charts","uid":"v-tvcg-20233289292","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20233299602":{"abstract":"Data transformation is an essential step in data science. While experts primarily use programming to transform their data, there is an increasing need to support non-programmers with user interface-based tools. With the rapid development in interaction techniques and computing environments, we report our empirical findings about the effects of interaction techniques and environments on performing data transformation tasks. Specifically, we studied the potential benefits of direct interaction and virtual reality (VR) for data transformation. We compared gesture interaction versus a standard WIMP user interface, each on the desktop and in VR. With the tested data and tasks, we found time performance was similar between desktop and VR. Meanwhile, VR demonstrates preliminary evidence to better support provenance and sense-making throughout the data transformation process. 
Our exploration of performing data transformation in VR also provides initial affirmation for enabling an iterative and fully immersive data science workflow.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Sungwon In"],"doi":"10.1109/TVCG.2023.3299602","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Immersive Analytics, Data Transformation, Data Science, Interaction, Empirical Study, Virtual/Augmented/Mixed Reality"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233299602","time_end":"","time_stamp":"","time_start":"","title":"This is the Table I Want! Interactive Data Transformation on Desktop and in Virtual Reality","uid":"v-tvcg-20233299602","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20233302308":{"abstract":"We visualize the predictions of multiple machine learning models to help biologists as they interactively make decisions about cell lineage---the development of a (plant) embryo from a single ovum cell. Based on a confocal microscopy dataset, biologists traditionally constructed the cell lineage manually, starting from this observation and reasoning backward in time to establish cell inheritance. To speed up this tedious process, we make use of machine learning (ML) models trained on a database of manually established cell lineages to assist the biologist in cell assignment. Most biologists, however, are not familiar with ML, nor is it clear to them which model best predicts the embryo's development. We have thus developed a visualization system that is designed to support biologists in exploring and comparing ML models, checking the model predictions, detecting possible ML model mistakes, and deciding on the most likely embryo development. To evaluate our proposed system, we deployed our interface with six biologists in an observational study. Our results show that the visual representations of machine learning are easily understandable, and our tool, LineageD+, could potentially increase biologists' working efficiency and enhance the understanding of embryos.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Jiayi Hong"],"doi":"10.1109/TVCG.2023.3302308","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Visualization, visual analytics, machine learning, comparing ML predictions, human-AI teaming, plant biology, cell lineage"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233302308","time_end":"","time_stamp":"","time_start":"","title":"Visualizing and Comparing Machine Learning Predictions to Improve Human-AI Teaming on the Example of Cell Lineage","uid":"v-tvcg-20233302308","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20233306356":{"abstract":"A multitude of studies have been conducted on graph drawing, but many existing methods only focus on optimizing a single aesthetic aspect of graph layouts. There are a few existing methods that attempt to develop a flexible solution for optimizing different aesthetic aspects measured by different aesthetic criteria. 
Furthermore, thanks to significant advances in deep learning techniques, several deep learning-based layout methods have recently been proposed, demonstrating the advantages of deep learning approaches for graph drawing. However, none of these existing methods can be directly applied to optimizing non-differentiable criteria without special accommodation. In this work, we propose a novel Generative Adversarial Network (GAN) based deep learning framework for graph drawing, called SmartGD, which can optimize any quantitative aesthetic goals even when they are non-differentiable. In cases where the aesthetic goal is too abstract to be described mathematically, SmartGD can draw graphs in a style similar to a collection of good layout examples, which might be selected by humans based on the abstract aesthetic goal. To demonstrate the effectiveness and efficiency of SmartGD, we conduct experiments on minimizing stress, minimizing edge crossing, maximizing crossing angle, and a combination of multiple aesthetics. The experimental results show that, compared with several popular graph drawing algorithms, SmartGD achieves good performance both quantitatively and qualitatively.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Xiaoqi Wang"],"doi":"10.1109/TVCG.2023.3306356","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233306356","time_end":"","time_stamp":"","time_start":"","title":"SmartGD: A GAN-Based Graph Drawing Framework for Diverse Aesthetic Goals","uid":"v-tvcg-20233306356","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20233310019":{"abstract":"The dynamic network visualization design space consists of two major dimensions: network structural and temporal representation. As more techniques are developed and published, a clear need for evaluation and experimental comparisons between them emerges. Most studies explore the temporal dimension and diverse interaction techniques supporting the participants, focusing on a single structural representation. Empirical evidence about performance and preference for different visualization approaches is scattered over different studies, experimental settings, and tasks. This paper aims to comprehensively investigate the dynamic network visualization design space in two evaluations. First, a controlled study assesses participants' response times, accuracy, and preferences for different combinations of network structural and temporal representations on typical dynamic network exploration tasks, with and without the support of standard interaction methods. Second, the best-performing combinations from the first study are enhanced based on participants' feedback and evaluated in a heuristic-based qualitative study with visualization experts on a real-world network. Our results highlight node-link with animation and playback controls as the best-performing combination and the most preferred based on ratings. Matrices achieve similar performance to node-link in the first study but have considerably lower scores in our second evaluation. 
Similarly, juxtaposition exhibits evident scalability issues in more realistic analysis contexts.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Velitchko Filipov"],"doi":"10.1109/TVCG.2023.3310019","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233310019","time_end":"","time_stamp":"","time_start":"","title":"On Network Structural and Temporal Encodings: A Space and Time Odyssey","uid":"v-tvcg-20233310019","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20233316469":{"abstract":"Automated visualization recommendation facilitates the rapid creation of effective visualizations, which is especially beneficial for users with limited time and limited knowledge of data visualization. There is an increasing trend in leveraging machine learning (ML) techniques to achieve an end-to-end visualization recommendation. However, existing ML-based approaches implicitly assume that there is only one appropriate visualization for a specific dataset, which is often not true for real applications. Also, they often work like a black box, making it difficult for users to understand why specific visualizations are recommended. To fill this research gap, we propose AdaVis, an adaptive and explainable approach to recommend one or multiple appropriate visualizations for a tabular dataset. It leverages a box embedding-based knowledge graph to model the possible one-to-many mapping relations among different entities (i.e., data features, dataset columns, datasets, and visualization choices). The embeddings of the entities and relations can be learned from dataset-visualization pairs. Also, AdaVis incorporates the attention mechanism into the inference framework. Attention can indicate the relative importance of data features for a dataset and provide fine-grained explainability. Our extensive evaluations through quantitative metrics, case studies, and user interviews demonstrate the effectiveness of AdaVis.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Songheng Zhang"],"doi":"10.1109/TVCG.2023.3316469","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Visualization Recommendation, Logical Reasoning, Data Visualization, Knowledge Graph"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233316469","time_end":"","time_stamp":"","time_start":"","title":"AdaVis: Adaptive and Explainable Visualization Recommendation for Tabular Data","uid":"v-tvcg-20233316469","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20233322372":{"abstract":"Visualization linting is a proven effective tool in assisting users to follow established visualization guidelines. Despite its success, visualization linting for choropleth maps, one of the most popular visualizations on the internet, has yet to be investigated. In this paper, we present GeoLinter, a linting framework for choropleth maps that assists in creating accurate and robust maps. 
Based on a set of design guidelines and metrics drawing upon a collection of best practices from the cartographic literature, GeoLinter detects potentially suboptimal design decisions and provides further recommendations on design improvement with explanations at each step of the design process. We perform a validation study to evaluate the proposed framework's functionality with respect to identifying and fixing errors and apply its results to improve the robustness of GeoLinter. Finally, we demonstrate the effectiveness of GeoLinter, validated through empirical studies, by applying it to a series of case studies using real-world datasets.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Arlen Fan"],"doi":"10.1109/TVCG.2023.3322372","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Data visualization, Image color analysis, Geology, Recommender systems, Guidelines, Bars, Visualization, Automated visualization design, choropleth maps, visualization linting, visualization recommendation"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233322372","time_end":"","time_stamp":"","time_start":"","title":"GeoLinter: A Linting Framework for Choropleth Maps","uid":"v-tvcg-20233322372","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20233322898":{"abstract":"Visual and interactive machine learning systems (IML) are becoming ubiquitous as they empower individuals with varied machine learning expertise to analyze data. However, it remains challenging to align interactions with visual marks to a user\u2019s intent for steering machine learning models. We explore using data and visual design probes to elicit users\u2019 desired interactions to steer ML models via visual encodings within IML interfaces. We conducted an elicitation study with 20 data analysts with varying expertise in ML. We summarize our findings as target-interaction pairs, which we compare to prior systems to assess the utility of the probes. We additionally surfaced insights about factors influencing how and why participants chose to interact with visual encodings, including refraining from interacting. Finally, we reflect on the value of gathering such formative empirical evidence via data and visual design probes ahead of developing IML prototypes.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Anamaria Crisan"],"doi":"10.1109/TVCG.2023.3322898","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Design Probes, Interactive Machine Learning, Model Steering, Semantic Interaction"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233322898","time_end":"","time_stamp":"","time_start":"","title":"Eliciting Model Steering Interactions from Users via Data and Visual Design Probes","uid":"v-tvcg-20233322898","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20233323150":{"abstract":"We examined user preferences to combine multiple interaction modalities for collaborative interaction with data shown on large vertical displays. 
Large vertical displays facilitate visual data exploration and allow the use of diverse interaction modalities by multiple users at different distances from the screen. Yet, how to offer multiple interaction modalities is a non-trivial problem. We conducted an elicitation study with 20 participants that generated 1015 interaction proposals combining touch, speech, pen, and mid-air gestures. Given the opportunity to interact using these four modalities, participants preferred speech interaction in 10 of 15 low-level tasks and direct manipulation for straightforward tasks such as showing a tooltip or selecting. In contrast to previous work, participants most favored unimodal and personal interactions. We identified what we call collaborative synonyms among their interaction proposals and found that pairs of users collaborated either unimodally and simultaneously or multimodally and sequentially. We provide insights into how end-users associate visual exploration tasks with certain modalities and how they collaborate at different interaction distances using specific interaction modalities. The supplemental material is available at https://osf.io/m8zuh.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Gabriela Molina Le\u00f3n"],"doi":"10.1109/TVCG.2023.3323150","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Multimodal interaction, collaborative work, large vertical displays, elicitation study, spatio-temporal data"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233323150","time_end":"","time_stamp":"","time_start":"","title":"Eliciting Multimodal and Collaborative Interactions for Data Exploration on Large Vertical Displays","uid":"v-tvcg-20233323150","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20233324851":{"abstract":"Dimensionality reduction (DR) algorithms are diverse and widely used for analyzing high-dimensional data. Various metrics and tools have been proposed to evaluate and interpret the DR results. However, most metrics and methods do not generalize well to measuring arbitrary DR results in terms of original-distribution fidelity, or they lack interactive exploration of DR results. There is still a need for more intuitive and quantitative analysis to interactively explore high-dimensional data and improve interpretability. We propose a metric and a generalized algorithm-agnostic approach based on the concept of capacity to evaluate and analyze the DR results. Based on our approach, we develop a visual analytic system HiLow for exploring high-dimensional data and projections. We also propose a mixed-initiative recommendation algorithm that assists users in interactively manipulating DR results. Users can compare the differences in data distribution after the interaction through HiLow. Furthermore, we propose a novel visualization design focusing on quantitative analysis of differences between high- and low-dimensional data distributions. 
Finally, through a user study and case studies, we validate the effectiveness of our approach and system in enhancing the interpretability of projections and analyzing the distribution of high- and low-dimensional data.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Siming Chen"],"doi":"10.1109/TVCG.2023.3324851","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233324851","time_end":"","time_stamp":"","time_start":"","title":"Interpreting High-Dimensional Projections With Capacity","uid":"v-tvcg-20233324851","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20233326698":{"abstract":"Researchers have derived many theoretical models for specifying users\u2019 insights as they interact with a visualization system. These representations are essential for understanding the insight discovery process, such as when inferring user interaction patterns that lead to insight or assessing the rigor of reported insights. However, theoretical models can be difficult to apply to existing tools and user studies, often due to discrepancies in how insight and its constituent parts are defined. This paper calls attention to the consistent structures that recur across the visualization literature and describes how they connect multiple theoretical representations of insight. We synthesize a unified formalism for insights using these structures, enabling a wider audience of researchers and developers to adopt the corresponding models. Through a series of theoretical case studies, we use our formalism to compare and contrast existing theories, revealing interesting research challenges in reasoning about a user's domain knowledge and leveraging synergistic approaches in data mining and data management research.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Leilani Battle"],"doi":"10.1109/TVCG.2023.3326698","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233326698","time_end":"","time_stamp":"","time_start":"","title":"What Do We Mean When We Say \u201cInsight\u201d? A Formal Synthesis of Existing Theory","uid":"v-tvcg-20233326698","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20233330262":{"abstract":"This paper presents a computational framework for the concise encoding of an ensemble of persistence diagrams, in the form of weighted Wasserstein barycenters [100], [102] of a dictionary of atom diagrams. We introduce a multi-scale gradient descent approach for the efficient resolution of the corresponding minimization problem, which interleaves the optimization of the barycenter weights with the optimization of the atom diagrams. Our approach leverages the analytic expressions for the gradient of both sub-problems to ensure fast iterations, and it additionally exploits shared-memory parallelism. 
Extensive experiments on public ensembles demonstrate the efficiency of our approach, with Wasserstein dictionary computations on the order of minutes for the largest examples. We show the utility of our contributions in two applications. First, we apply Wasserstein dictionaries to data reduction and reliably compress persistence diagrams by concisely representing them with their weights in the dictionary. Second, we present a dimensionality reduction framework based on a Wasserstein dictionary defined with a small number of atoms (typically three) and encode the dictionary as a low-dimensional simplex embedded in a visual space (typically in 2D). In both applications, quantitative experiments assess the relevance of our framework. Finally, we provide a C++ implementation that can be used to reproduce our results.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Julien Tierny"],"doi":"10.1109/TVCG.2023.3330262","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Topological data analysis, ensemble data, persistence diagrams"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233330262","time_end":"","time_stamp":"","time_start":"","title":"Wasserstein Dictionaries of Persistence Diagrams","uid":"v-tvcg-20233330262","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20233332511":{"abstract":"We present Submerse, an end-to-end framework for visualizing flooding scenarios on large and immersive display ecologies. Specifically, we reconstruct a surface mesh from input flood simulation data and generate a to-scale 3D virtual scene by incorporating geographical data such as terrain, textures, buildings, and additional scene objects. To optimize computation and memory performance for large simulation datasets, we discretize the data on an adaptive grid using dynamic quadtrees and support level-of-detail based rendering. Moreover, to provide a perception of flooding direction for a time instance, we animate the surface mesh by synthesizing water waves. As interaction is key for effective decision-making and analysis, we introduce two novel techniques for flood visualization in immersive systems: (1) an automatic scene-navigation method using optimal camera viewpoints generated for marked points-of-interest based on the display layout, and (2) an AR-based focus+context technique using an auxiliary display system. Submerse is developed in collaboration between computer scientists and atmospheric scientists. 
We evaluate the effectiveness of our system and application by conducting workshops with emergency managers, domain experts, and concerned stakeholders in the Stony Brook Reality Deck, an immersive gigapixel facility, to visualize a superstorm flooding scenario in New York City.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Saeed Boorboor"],"doi":"10.1109/TVCG.2023.3332511","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Camera navigation, flooding simulation visualization, immersive visualization, mixed reality"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233332511","time_end":"","time_stamp":"","time_start":"","title":"Submerse: Visualizing Storm Surge Flooding Simulations in Immersive Display Ecologies","uid":"v-tvcg-20233332511","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20233332999":{"abstract":"Quantum computing offers significant speedup compared to classical computing, which has led to a growing interest among users in learning and applying quantum computing across various applications. However, quantum circuits, which are fundamental for implementing quantum algorithms, can be challenging for users to understand due to their underlying logic, such as the temporal evolution of quantum states and the effect of quantum amplitudes on the probability of basis quantum states. To fill this research gap, we propose QuantumEyes, an interactive visual analytics system to enhance the interpretability of quantum circuits at both the global and local levels. For the global-level analysis, we present three coupled visualizations to delineate the changes of quantum states and the underlying reasons: a Probability Summary View to overview the probability evolution of quantum states; a State Evolution View to enable an in-depth analysis of the influence of quantum gates on the quantum states; a Gate Explanation View to show the individual qubit states and facilitate a better understanding of the effect of quantum gates. For the local-level analysis, we design a novel geometrical visualization dandelion chart to explicitly reveal how the quantum amplitudes affect the probability of the quantum state. We thoroughly evaluated QuantumEyes as well as the novel dandelion chart integrated into it through two case studies on different types of quantum algorithms and in-depth expert interviews with 12 domain experts. 
The results demonstrate the effectiveness and usability of our approach in enhancing the interpretability of quantum circuits.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Shaolun Ruan"],"doi":"10.1109/TVCG.2023.3332999","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Data visualization, design study, interpretability, quantum computing"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233332999","time_end":"","time_stamp":"","time_start":"","title":"QuantumEyes: Towards Better Interpretability of Quantum Circuits","uid":"v-tvcg-20233332999","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20233333356":{"abstract":"As urban populations grow, effectively assessing urban performance measures such as livability and comfort becomes increasingly important due to their significant socioeconomic impacts. While Point of Interest (POI) data has been utilized for various applications in location-based services, its potential for urban performance analytics remains unexplored. In this paper, we present SenseMap, a novel approach for analyzing urban performance by leveraging POI data as a semantic representation of urban functions. We quantify the contribution of POIs to different urban performance measures by calculating semantic textual similarities on our constructed corpus. We propose Semantic-adaptive Kernel Density Estimation, which takes into account POIs\u2019 influential areas across different Traffic Analysis Zones and semantic contributions to generate semantic density maps for measures. We design and implement a feature-rich, real-time visual analytics system for users to explore the urban performance of their surroundings. Evaluations with human judgment and reference data demonstrate the feasibility and validity of our method. Usage scenarios and user studies demonstrate the capability, usability, and explainability of our system.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Juntong Chen"],"doi":"10.1109/TVCG.2023.3333356","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Urban data, semantic textual similarity, point of interest, density map, visual analytics, visualization design"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233333356","time_end":"","time_stamp":"","time_start":"","title":"SenseMap: Urban Performance Visualization and Analytics via Semantic Textual Similarity","uid":"v-tvcg-20233333356","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20233334513":{"abstract":"Data integration is often performed to consolidate information from multiple disparate data sources during visual data analysis. However, integration operations are usually separate from visual analytics operations such as encode and filter in both interface design and empirical research. We conducted a preliminary user study to investigate whether and how data integration should be incorporated directly into the visual analytics process. 
We used two interface alternatives featuring contrasting approaches to the data preparation and analysis workflow: manual file-based ex-situ integration as a separate step from visual analytics operations; and automatic UI-based in-situ integration merged with visual analytics operations. Participants were asked to complete specific and free-form tasks with each interface, browsing for patterns, generating insights, and summarizing relationships between attributes distributed across multiple files. Analyzing participants' interactions and feedback, we found that task completion time and total interactions were similar across interfaces and tasks; we also observed unique integration strategies between interfaces and emergent behaviors related to satisficing and cognitive bias. Participants' time spent and interactions revealed that in-situ integration enabled users to spend more time on analysis tasks compared with ex-situ integration. Participants' integration strategies and analytical behaviors revealed differences in interface usage for generating and tracking hypotheses and insights. With these results, we synthesized preliminary guidelines for designing future visual analytics interfaces that can support integrating attributes throughout an active analysis process.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Adam Coscia"],"doi":"10.1109/TVCG.2023.3334513","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Visual analytics, Data integration, User interface design, Integration strategies, Analytical behaviors."],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233334513","time_end":"","time_stamp":"","time_start":"","title":"Preliminary Guidelines For Combining Data Integration and Visual Data Analysis","uid":"v-tvcg-20233334513","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20233334755":{"abstract":"This paper presents a computational framework for the Wasserstein auto-encoding of merge trees (MT-WAE), a novel extension of the classical auto-encoder neural network architecture to the Wasserstein metric space of merge trees. In contrast to traditional auto-encoders which operate on vectorized data, our formulation explicitly manipulates merge trees on their associated metric space at each layer of the network, resulting in superior accuracy and interpretability. Our novel neural network approach can be interpreted as a non-linear generalization of previous linear attempts [79] at merge tree encoding. It also trivially extends to persistence diagrams. Extensive experiments on public ensembles demonstrate the efficiency of our algorithms, with MT-WAE computations on the order of minutes on average. We show the utility of our contributions in two applications adapted from previous work on merge tree encoding [79]. First, we apply MT-WAE to merge tree compression, by concisely representing merge trees with their coordinates in the final layer of our auto-encoder. Second, we document an application to dimensionality reduction, by exploiting the latent space of our auto-encoder, for the visual analysis of ensemble data. We illustrate the versatility of our framework by introducing two penalty terms that help preserve in the latent space both the Wasserstein distances between merge trees and their clusters. 
In both applications, quantitative experiments assess the relevance of our framework. Finally, we provide a C++ implementation that can be used for reproducibility.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Julien Tierny"],"doi":"10.1109/TVCG.2023.3334755","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Topological data analysis, ensemble data, persistence diagrams, merge trees, auto-encoders, neural networks"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233334755","time_end":"","time_stamp":"","time_start":"","title":"Wasserstein Auto-Encoders of Merge Trees (and Persistence Diagrams)","uid":"v-tvcg-20233334755","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20233336588":{"abstract":"This article explores how the ability to recall information in data visualizations depends on the presentation technology. Participants viewed 10 Isotype visualizations on a 2D screen, in 3D, in Virtual Reality (VR), and in Mixed Reality (MR). To provide a fair comparison between the three 3D conditions, we used LIDAR to capture the details of the physical rooms, and used this information to create our textured 3D models. For all environments, we measured the number of visualizations recalled and their order (2D) or spatial location (3D, VR, MR). We also measured the number of syntactic and semantic features recalled. Results of our study show increased recall and greater richness of data understanding in the MR condition. Not only did participants recall more visualizations and ordinal/spatial positions in MR, but they also remembered more details about graph axes and data mappings, and more information about the shape of the data. We discuss how differences in the spatial and kinesthetic cues provided in these different environments could contribute to these results, and reasons why we did not observe comparable performance in the 3D and VR conditions.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Christophe Hurter"],"doi":"10.1109/TVCG.2023.3336588","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Data visualization, Three-dimensional displays, Virtual reality, Mixed reality, Electronic mail, Syntactics, Semantics"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233336588","time_end":"","time_stamp":"","time_start":"","title":"Memory Recall for Data Visualizations in Mixed Reality, Virtual Reality, 3D, and 2D","uid":"v-tvcg-20233336588","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20233337173":{"abstract":"Visualization design studies bring together visualization researchers and domain experts to address yet unsolved data analysis challenges stemming from the needs of the domain experts. Typically, the visualization researchers lead the design study process and implementation of any visualization solutions. This setup leverages the visualization researchers' knowledge of methodology, design, and programming, but limited availability to synchronize with the domain experts can hamper the design process. 
We consider an alternative setup where the domain experts take the lead in the design study, supported by the visualization experts. In this study, the domain experts are computer architecture experts who simulate and analyze novel computer chip designs. These chips rely on a Network-on-Chip (NOC) to connect components. The experts want to understand how the chip designs perform and what in the design led to their performance. To aid this analysis, we develop Vis4Mesh, a visualization system that provides spatial, temporal, and architectural context to simulated NOC behavior. Integration with an existing computer architecture visualization tool enables architects to perform deep-dives into specific architecture component behavior. We validate Vis4Mesh through a case study and a user study with computer architecture researchers. We reflect on our design and process, discussing advantages, disadvantages, and guidance for engaging in domain expert-led design studies.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yifan Sun"],"doi":"10.1109/TVCG.2023.3337173","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Data Visualization, Design Study, Network-on-Chip, Performance Analysis"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233337173","time_end":"","time_stamp":"","time_start":"","title":"Visual Exploratory Analysis for Designing Large-Scale Network-on-Chip Architectures: A Domain Expert-Led Design Study","uid":"v-tvcg-20233337173","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20233337396":{"abstract":"Partitioning a dynamic network into subsets (i.e., snapshots) based on disjoint time intervals is a widely used technique for understanding how structural patterns of the network evolve. However, selecting an appropriate time window (i.e., slicing a dynamic network into snapshots) is challenging and time-consuming, often involving a trial-and-error approach to investigating underlying structural patterns. To address this challenge, we present MoNetExplorer, a novel interactive visual analytics system that leverages temporal network motifs to provide recommendations for window sizes and support users in visually comparing different slicing results. MoNetExplorer provides a comprehensive analysis based on window size, including (1) a temporal overview to identify the structural information, (2) temporal network motif composition, and (3) node-link-diagram-based details to enable users to identify and understand structural patterns at various temporal resolutions. To demonstrate the effectiveness of our system, we conducted a case study with network researchers using two real-world dynamic network datasets. 
Our case studies show that the system effectively supports users in gaining valuable insights into the temporal and structural aspects of dynamic networks.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Seokweon Jung"],"doi":"10.1109/TVCG.2023.3337396","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Visual analytics, Measurement, Size measurement, Windows, Time measurement, Data visualization, Task analysis, Visual analytics, Dynamic networks, Temporal network motifs, Interactive network slicing"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233337396","time_end":"","time_stamp":"","time_start":"","title":"A Visual Analytics System for Analyzing Dynamic Networks with Temporal Network Motifs","uid":"v-tvcg-20233337396","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20233337642":{"abstract":"Molecular docking is a key technique in various fields like structural biology, medicinal chemistry, and biotechnology. It is widely used for virtual screening during drug discovery, computer-assisted drug design, and protein engineering. A general molecular docking process consists of the target and ligand selection, their preparation, and the docking process itself, followed by the evaluation of the results. However, the most commonly used docking software provides no or only very basic evaluation capabilities. Scripting and external molecular viewers are often used instead, but these are not designed for an efficient analysis of docking results. Therefore, we developed InVADo, a comprehensive interactive visual analysis tool for large docking data. It consists of multiple linked 2D and 3D views. It filters and spatially clusters the data, and enriches it with post-docking analysis results of protein-ligand interactions and functional groups, to enable well-founded decision-making. In an exemplary case study, domain experts confirmed that InVADo facilitates and accelerates the analysis workflow. They rated it as a convenient, comprehensive, and feature-rich tool, especially useful for virtual screening.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Michael Krone"],"doi":"10.1109/TVCG.2023.3337642","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Molecular Docking, AutoDock, Virtual Screening, Visual Analysis, Visualization, Clustering, Protein-Ligand Interaction"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233337642","time_end":"","time_stamp":"","time_start":"","title":"InVADo: Interactive Visual Analysis of Molecular Docking Data","uid":"v-tvcg-20233337642","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20233338451":{"abstract":"This paper investigates the role of text in visualizations, specifically the impact of text position, semantic content, and biased wording. Two empirical studies were conducted based on two tasks (predicting data trends and appraising bias) using two visualization types (bar and line charts). 
While the addition of text had a minimal effect on how people perceived data trends, it had a significant impact on how biased they perceived the authors to be. This finding revealed a relationship between the degree of bias in textual information and the perception of the authors' bias. Exploratory analyses supported an interaction between a person's prediction and the degree of bias they perceived. This paper also develops a crowdsourced method for creating chart annotations that range from neutral to highly biased. This research highlights the need for designers to mitigate potential polarization of readers' opinions based on how authors' ideas are expressed.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Chase Stokes"],"doi":"10.1109/TVCG.2023.3338451","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Visualization, text, annotation, perceived bias, judgment, prediction"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233338451","time_end":"","time_stamp":"","time_start":"","title":"The Role of Text in Visualizations: How Annotations Shape Perceptions of Bias and Influence Predictions","uid":"v-tvcg-20233338451","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20233340770":{"abstract":"We present VoxAR, a method to facilitate an effective visualization of volume-rendered objects in optical see-through head-mounted displays (OST-HMDs). The potential of augmented reality (AR) to integrate digital information into the physical world provides new opportunities for visualizing and interpreting scientific data. However, a limitation of OST-HMD technology is that rendered pixels of a virtual object can interfere with the colors of the real world, making it challenging to perceive the augmented virtual information accurately. We address this challenge in a two-step approach. First, VoxAR determines an appropriate placement of the volume-rendered object in the real-world scene by evaluating a set of spatial and environmental objectives, managed as user-selected preferences and pre-defined constraints. We achieve a real-time solution by implementing the objectives using a GPU shader language. Next, VoxAR adjusts the colors of the input transfer function (TF) based on the real-world placement region. Specifically, we introduce a novel optimization method that adjusts the TF colors such that the resulting volume-rendered pixels are discernible against the background and the TF maintains the perceptual mapping between the colors and data intensity values. 
Finally, we present an assessment of our approach through objective evaluations and subjective user studies.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Saeed Boorboor"],"doi":"10.1109/TVCG.2023.3340770","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Adaptive Visualization, Situated Visualization, Augmented Reality, Volume Rendering"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233340770","time_end":"","time_stamp":"","time_start":"","title":"VoxAR: Adaptive Visualization of Volume Rendered Objects in Optical See-Through Augmented Reality","uid":"v-tvcg-20233340770","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20233341990":{"abstract":"We report on challenges and considerations for supporting design processes for visualizations in motion embedded in sports videos. We derive our insights from analyzing swimming race visualizations and motion-related data, building a technology probe, as well as a study with designers. Understanding how to design situated visualizations in motion is important for a variety of contexts. Competitive sports coverage, in particular, increasingly includes information on athlete or team statistics and records. Although moving visual representations attached to athletes or other targets are starting to appear, systematic investigations on how to best support their design process in the context of sports videos are still missing. Our work makes several contributions in identifying opportunities for visualizations to be added to swimming competition coverage but, most importantly, in identifying requirements and challenges for designing situated visualizations in motion. Our investigations include the analysis of a survey with swimming enthusiasts on their motion-related information needs, an ideation workshop to collect designs and elicit design challenges, the design of a technology probe that allows designers to create embedded visualizations in motion based on real data, and an evaluation with visualization designers that aimed to understand the benefits of designing directly on videos.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Lijie Yao"],"doi":"10.1109/TVCG.2023.3341990","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Data visualization, Sports, Videos, Probes, Surveys, Authoring systems, Games, Design framework, Embedded visualization, Sports analytics, Visualization in motion"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233341990","time_end":"","time_stamp":"","time_start":"","title":"Designing for Visualization in Motion: Embedding Visualizations in Swimming Videos","uid":"v-tvcg-20233341990","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20233345340":{"abstract":"Label quality issues, such as noisy labels and imbalanced class distributions, have negative effects on model performance. Automatic reweighting methods identify problematic samples with label quality issues by recognizing their negative effects on validation samples and assigning lower weights to them. 
However, these methods fail to achieve satisfactory performance when the validation samples are of low quality. To tackle this, we develop Reweighter, a visual analysis tool for sample reweighting. The reweighting relationships between validation samples and training samples are modeled as a bipartite graph. Based on this graph, a validation sample improvement method is developed to improve the quality of validation samples. Since the automatic improvement may not always be perfect, a co-cluster-based bipartite graph visualization is developed to illustrate the reweighting relationships and support the interactive adjustments to validation samples and reweighting results. The adjustments are converted into the constraints of the validation sample improvement method to further improve validation samples. We demonstrate the effectiveness of Reweighter in improving reweighting results through quantitative evaluation and two case studies.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Weikai Yang"],"doi":"10.1109/TVCG.2023.3345340","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233345340","time_end":"","time_stamp":"","time_start":"","title":"Interactive Reweighting for Mitigating Label Quality Issues","uid":"v-tvcg-20233345340","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20233345373":{"abstract":"Traditional deep learning algorithms assume that all data is available during training, which presents challenges when handling large-scale time-varying data. To address this issue, we propose a data reduction pipeline called knowledge distillation-based implicit neural representation (KD-INR) for compressing large-scale time-varying data. The approach consists of two stages: spatial compression and model aggregation. In the first stage, each time step is compressed using an implicit neural representation with bottleneck layers and features of interest preservation-based sampling. In the second stage, we utilize an offline knowledge distillation algorithm to extract knowledge from the trained models and aggregate it into a single model. We evaluated our approach on a variety of time-varying volumetric data sets. 
Both quantitative and qualitative results, such as PSNR, LPIPS, and rendered images, demonstrate that KD-INR surpasses the state-of-the-art approaches, including learning-based (i.e., CoordNet, NeurComp, and SIREN) and lossy compression (i.e., SZ3, ZFP, and TTHRESH) methods, at various compression ratios ranging from hundreds to ten thousand.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Han Jun"],"doi":"10.1109/TVCG.2023.3345373","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Time-varying data compression, implicit neural representation, knowledge distillation, volume visualization."],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233345373","time_end":"","time_stamp":"","time_start":"","title":"KD-INR: Time-Varying Volumetric Data Compression via Knowledge Distillation-based Implicit Neural Representation","uid":"v-tvcg-20233345373","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20233346640":{"abstract":"Is it true that if citizens understand hurricane probabilities, they will make more rational decisions for evacuation? Finding answers to such questions is not straightforward in the literature because the terms \u201cjudgment\u201d and \u201cdecision making\u201d are often used interchangeably. This terminology conflation leads to a lack of clarity on whether people make suboptimal decisions because of inaccurate judgments of information conveyed in visualizations or because they use alternative yet currently unknown heuristics. To decouple judgment from decision making, we review relevant concepts from the literature and present two preregistered experiments (N=601) to investigate if the task (judgment vs. decision making), the scenario (sports vs. humanitarian), and the visualization (quantile dotplots, density plots, probability bars) affect accuracy. While experiment 1 was inconclusive, we found evidence for a difference in experiment 2. Contrary to our expectations and previous research, which found decisions less accurate than their direct-equivalent judgments, our results pointed in the opposite direction. Our findings further revealed that decisions were less vulnerable to status-quo bias, suggesting decision makers may disfavor responses associated with inaction. We also found that both scenario and visualization types can influence people's judgments and decisions. Although effect sizes are not large and results should be interpreted carefully, we conclude that judgments cannot be safely used as proxy tasks for decision making, and discuss implications for visualization research and beyond. 
Materials and preregistrations are available at https://osf.io/ufzp5/?view_only=adc0f78a23804c31bf7fdd9385cb264f.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Ba\u015fak Oral"],"doi":"10.1109/TVCG.2023.3346640","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Data visualization, Task analysis, Decision making, Visualization, Bars, Sports, Terminology, Cognition, Decision Making, Judgment, Psychology, Visualization"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233346640","time_end":"","time_stamp":"","time_start":"","title":"Decoupling Judgment and Decision Making: A Tale of Two Tails","uid":"v-tvcg-20233346640","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20233346641":{"abstract":"Currently, growing data sources and long-running algorithms impede user attention and interaction with visual analytics applications. Progressive visualization (PV) and visual analytics (PVA) alleviate this problem by allowing immediate feedback and interaction with large datasets and complex computations, avoiding waits for complete results by using partial results that improve over time. Yet, creating a progressive visualization requires more effort than a regular visualization, but it also opens up new possibilities, such as steering the computations towards more relevant parts of the data, thus saving computational resources. However, there is currently no comprehensive overview of the design space for progressive visualization systems. We surveyed the related work on PV and derived a new taxonomy for progressive visualizations by systematically categorizing all PV publications that included visualizations with progressive features. Progressive visualizations can be categorized by well-known visualization taxonomies, but we also found that they can be distinguished by the way they manage their data processing, data domain, and visual updates. Furthermore, we identified key properties such as uncertainty, steering, visual stability, and real-time processing that differ significantly in progressive applications. We also collected evaluation methodologies reported by the publications and conclude with statistical findings, research gaps, and open challenges. 
A continuously updated visual browser of the survey data is available at visualsurvey.net/pva.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Alex Ulmer"],"doi":"10.1109/TVCG.2023.3346641","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Data visualization, Convergence, Visual analytics, Taxonomy, Surveys, Rendering (computer graphics), Task analysis, Progressive Visual Analytics, Progressive Visualization, Taxonomy, State-of-the-Art Report, Survey"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233346641","time_end":"","time_stamp":"","time_start":"","title":"A Survey on Progressive Visualization","uid":"v-tvcg-20233346641","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20233346713":{"abstract":"Recent growth in the popularity of large language models has led to their increased usage for summarizing, predicting, and generating text, making it vital to help researchers and engineers understand how and why they work. We present KnowledgeVIS, a human-in-the-loop visual analytics system for interpreting language models using fill-in-the-blank sentences as prompts. By comparing predictions between sentences, KnowledgeVIS reveals learned associations that intuitively connect what language models learn during training to natural language tasks downstream, helping users create and test multiple prompt variations, analyze predicted words using a novel semantic clustering technique, and discover insights using interactive visualizations. Collectively, these visualizations help users identify the likelihood and uniqueness of individual predictions, compare sets of predictions between prompts, and summarize patterns and relationships between predictions across all prompts. We demonstrate the capabilities of KnowledgeVIS with feedback from six NLP experts as well as three different use cases: (1) probing biomedical knowledge in two domain-adapted models; (2) evaluating harmful identity stereotypes; and (3) discovering facts and relationships between three general-purpose models.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Adam Coscia"],"doi":"10.1109/TVCG.2023.3346713","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Visual analytics, language models, prompting, interpretability, machine learning."],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233346713","time_end":"","time_stamp":"","time_start":"","title":"KnowledgeVIS: Interpreting Language Models by Comparing Fill-in-the-Blank Prompts","uid":"v-tvcg-20233346713","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20243350076":{"abstract":"Ensembles of contours arise in various applications like simulation, computer-aided design, and semantic segmentation. Uncovering ensemble patterns and analyzing individual members is a challenging task that suffers from clutter. Ensemble statistical summarization can alleviate this issue by permitting analysis of ensembles' distributional components like the mean and median, confidence intervals, and outliers.
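The fill-in-the-blank probing that KnowledgeVIS builds on can be reproduced with any off-the-shelf masked language model; a minimal sketch using the Hugging Face transformers pipeline (the model choice and prompts are illustrative, not the system's own):

from transformers import pipeline

# Compare predictions across prompt variants, the basic KnowledgeVIS-style query.
unmasker = pipeline("fill-mask", model="bert-base-uncased")
prompts = [
    "The nurse said [MASK] would be back soon.",
    "The doctor said [MASK] would be back soon.",
]
for prompt in prompts:
    print(prompt)
    for p in unmasker(prompt, top_k=5):
        print(f"  {p['token_str']:>8}  {p['score']:.3f}")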
Contour boxplots, powered by Contour Band Depth (CBD), are a popular non-parametric ensemble summarization method that benefits from CBD's generality, robustness, and theoretical properties. In this work, we introduce Inclusion Depth (ID), a new notion of contour depth with three defining characteristics. First, ID is a generalization of functional Half-Region Depth, which offers several theoretical guarantees. Second, ID relies on a simple principle: the inside/outside relationships between contours. This facilitates implementing ID and understanding its results. Third, the computational complexity of ID scales quadratically in the number of members of the ensemble, improving on CBD's cubic complexity. In practice, this also speeds up the computation, enabling the use of ID for exploring large contour ensembles or in contexts requiring multiple depth evaluations, such as clustering. In a series of experiments on synthetic data and case studies with meteorological and segmentation data, we evaluate ID's performance and demonstrate its capabilities for the visual analysis of contour ensembles.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Nicol\u00e1s Ch\u00e1ves"],"doi":"10.1109/TVCG.2024.3350076","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Uncertainty visualization, contours, ensemble summarization, depth statistics."],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243350076","time_end":"","time_stamp":"","time_start":"","title":"Inclusion Depth for Contour Ensembles","uid":"v-tvcg-20243350076","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20243354561":{"abstract":"Interactive visualization can support fluid exploration but is often limited to predetermined tasks. Scripting can support a vast range of queries but may be more cumbersome for free-form exploration. Embedding interactive visualization in scripting environments, such as computational notebooks, provides an opportunity to leverage the strengths of both direct manipulation and scripting. We investigate interactive visualization design methodology, choices, and strategies under this paradigm through a design study of calling context trees used in performance analysis, a field which exemplifies typical exploratory data analysis workflows with big data and hard-to-define problems. We first produce a formal task analysis assigning tasks to graphical or scripting contexts based on their specificity, frequency, and suitability. We then design a notebook-embedded interactive visualization and validate it with intended users. In a follow-up study, we present participants with multiple graphical and scripting interaction modes to elicit feedback about notebook-embedded visualization design, finding consensus in support of the interaction model.
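The inside/outside principle behind Inclusion Depth can be sketched directly, assuming contours are rasterized to boolean masks; this strict-containment sketch is a simplification (the paper's definition also handles contours that only partially nest), but it shows why the cost is quadratic rather than cubic in the ensemble size:

import numpy as np

def inclusion_depth(masks):
    # masks: boolean array of shape (N, H, W), one closed contour per member.
    n = len(masks)
    depth = np.zeros(n)
    for i in range(n):
        contained_in = contains = 0
        for j in range(n):
            if i == j:
                continue
            if np.all(masks[i] <= masks[j]):  # member i lies inside member j
                contained_in += 1
            if np.all(masks[j] <= masks[i]):  # member j lies inside member i
                contains += 1
        # Half-region-style depth: limited by the smaller of the two counts.
        depth[i] = min(contained_in, contains) / (n - 1)
    return depth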
We report and reflect on observations regarding the process and design implications for combining visualization and scripting in notebooks.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Connor Scully-Allison"],"doi":"10.1109/TVCG.2024.3354561","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Exploratory Data Analysis, Interactive Data Analysis, Computational Notebooks, Hybrid Visualization-Scripting, Visualization Design"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243354561","time_end":"","time_stamp":"","time_start":"","title":"Design Concerns for Integrated Scripting and Interactive Visualization in Notebook Environments","uid":"v-tvcg-20243354561","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20243355884":{"abstract":"News articles containing data visualizations play an important role in informing the public on issues ranging from public health to politics. Recent research on the persuasive appeal of data visualizations suggests that prior attitudes can be notoriously difficult to change. Inspired by an NYT article, we designed two experiments to evaluate the impact of elicitation and contrasting narratives on attitude change, recall, and engagement. We hypothesized that eliciting prior beliefs leads to more elaborative thinking that ultimately results in higher attitude change, better recall, and engagement. Our findings revealed that visual elicitation leads to higher engagement in terms of feelings of surprise. While there is an overall attitude change across all experiment conditions, we did not observe a significant effect of belief elicitation on attitude change. With regard to recall error, while participants in the draw trend elicitation exhibited significantly lower recall error than participants in the categorize trend condition, we found no significant difference in recall error when comparing elicitation conditions to no elicitation. In a follow-up study, we added contrasting narratives with the purpose of making the main visualization (communicating data on the focal issue) appear strikingly different. Compared to the results of Study 1, we found that contrasting narratives improved engagement in terms of surprise and interest but interestingly resulted in higher recall error and no significant change in attitude. 
We discuss the effects of elicitation and contrasting narratives in the context of topic involvement and the strengths of temporal trends encoded in the data visualization.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Milad Rogha"],"doi":"10.1109/TVCG.2024.3355884","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Data Visualization, Market Research, Visualization, Uncertainty, Data Models, Correlation, Attitude Control, Belief Elicitation, Visual Elicitation, Data Visualization, Contrasting Narratives"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243355884","time_end":"","time_stamp":"","time_start":"","title":"The Impact of Elicitation and Contrasting Narratives on Engagement, Recall and Attitude Change with News Articles Containing Data Visualization","uid":"v-tvcg-20243355884","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20243356566":{"abstract":"The increasing ubiquity of data in everyday life has elevated the importance of data literacy and accessible data representations, particularly for individuals with disabilities. While prior research predominantly focuses on the needs of the visually impaired, our survey aims to broaden this scope by investigating accessible data representations across a more inclusive spectrum of disabilities. After conducting a systematic review of 152 accessible data representation papers from ACM and IEEE databases, we found that roughly 78% of existing articles center on vision impairments. In this paper, we conduct a comprehensive review of the remaining 22% of papers focused on underrepresented disability communities. We developed categorical dimensions based on accessibility, visualization, and human-computer interaction to classify the papers. These dimensions include the community of focus, issues addressed, contribution type, study methods, participants, data type, visualization type, and data domain. Our work redefines accessible data representations by illustrating their application for disabilities beyond those related to vision. Building on our literature review, we identify and discuss opportunities for future research in accessible data representations. All supplemental materials are available at https://osf.io/yv4xm/?view_only=7b36a3fbf7a14b3888029966faa3def9.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Brianna Wimer"],"doi":"10.1109/TVCG.2024.3356566","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Accessibility, Data Representations."],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243356566","time_end":"","time_stamp":"","time_start":"","title":"Beyond Vision Impairments: Redefining the Scope of Accessible Data Representations","uid":"v-tvcg-20243356566","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20243358919":{"abstract":"We conduct two in-lab experiments (N=93) to evaluate the effectiveness of Gantt charts, extended Gantt charts, and stringline charts for visualizing fixed-order event sequence data.
We first formulate five types of event sequences and define three types of sequence elements: point events, interval events, and the temporal gaps between them. Our two experiments focus on event sequences with a pre-defined, fixed order, and measure task error rates and completion time. The first experiment shows single sequences and assesses the three charts' performance in comparing event duration or gap. The second experiment shows multiple sequences and evaluates how well the charts reveal temporal patterns. The results suggest that when visualizing single fixed-order event sequences, 1) Gantt and extended Gantt charts lead to comparable error rates in the duration-comparing task; 2) Gantt charts exhibit either shorter or equal completion time than extended Gantt charts; 3) both Gantt and extended Gantt charts demonstrate shorter completion times than stringline charts; 4) however, stringline charts outperform the other two charts with fewer errors in the comparing task when event type counts are high. Additionally, when visualizing multiple point-based fixed-order event sequences, stringline charts require less time than Gantt charts for people to find temporal patterns. Based on these findings, we discuss design opportunities for visualizing fixed-order event sequences and discuss future avenues for optimizing these charts.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Junxiu Tang"],"doi":"10.1109/TVCG.2024.3358919","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Gantt chart, stringline chart, Marey's graph, event sequence, empirical study"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243358919","time_end":"","time_stamp":"","time_start":"","title":"A Comparative Study on Fixed-order Event Sequence Visualizations: Gantt, Extended Gantt, and Stringline Charts","uid":"v-tvcg-20243358919","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20243364388":{"abstract":"Seasonal-trend decomposition based on loess (STL) is a powerful tool to explore time series data visually. In this paper, we present an extension of STL to uncertain data, named uncertainty-aware STL (UASTL). Our method propagates multivariate Gaussian distributions mathematically exactly through the entire analysis and visualization pipeline. Thereby, stochastic quantities shared between the components of the decomposition are preserved. Moreover, we present application scenarios with uncertainty modeling based on Gaussian processes, e.g., data with uncertain areas or missing values. Besides these mathematical results and modeling aspects, we introduce visualization techniques that address the challenges of uncertainty visualization and the problem of visualizing highly correlated components of a decomposition. The global uncertainty propagation enables the time series visualization with STL-consistent samples, the exploration of correlation between and within decomposition's components, and the analysis of the impact of varying uncertainty. Finally, we show the usefulness of UASTL and the importance of uncertainty visualization with several examples. 
Thereby, a comparison with conventional STL is performed.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Tim Krake"],"doi":"10.1109/TVCG.2024.3364388","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Visualization techniques and methodologies, Probability and statistics, Statistical computing, Stochastic processes"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243364388","time_end":"","time_stamp":"","time_start":"","title":"Uncertainty-Aware Seasonal-Trend Decomposition Based on Loess","uid":"v-tvcg-20243364388","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20243364841":{"abstract":"The need to understand the structure of hierarchical or high-dimensional data is present in a variety of fields. Hyperbolic spaces have proven to be an important tool for embedding computations and analysis tasks as their non-linear nature lends itself well to tree or graph data. Subsequently, they have also been used in the visualization of high-dimensional data, where they exhibit increased embedding performance. However, none of the existing dimensionality reduction methods for embedding into hyperbolic spaces scale well with the size of the input data. That is because the embeddings are computed via iterative optimization schemes and the computation cost of every iteration is quadratic in the size of the input. Furthermore, due to the non-linear nature of hyperbolic spaces, Euclidean acceleration structures cannot directly be translated to the hyperbolic setting. This paper introduces the first acceleration structure for hyperbolic embeddings, building upon a polar quadtree. We compare our approach with existing methods and demonstrate that it computes embeddings of similar quality in significantly less time. Implementation and scripts for the experiments can be found at this https URL.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Martin Skrodzki"],"doi":"10.1109/TVCG.2024.3364841","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Human-Computer Interaction (cs.HC); Artificial Intelligence (cs.AI); Machine Learning (cs.LG); Quantitative Methods (q-bio.QM); Machine Learning (stat.ML); Dimensionality reduction, t-SNE, hyperbolic embedding, acceleration structure"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243364841","time_end":"","time_stamp":"","time_start":"","title":"Accelerating hyperbolic t-SNE","uid":"v-tvcg-20243364841","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20243365089":{"abstract":"Implicit neural representations (INRs) are widely used for scientific data reduction and visualization by modeling the function that maps a spatial location to a data value.
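UASTL propagates multivariate Gaussian distributions exactly through the STL pipeline; without that machinery, the effect can be roughly approximated by Monte Carlo sampling around a standard STL implementation. A sketch using statsmodels (the series, period, and noise level are placeholders, and sampling is a stand-in for the paper's exact propagation):

import numpy as np
from statsmodels.tsa.seasonal import STL

rng = np.random.default_rng(1)
t = np.arange(240)
series = 10 + 0.05 * t + 3 * np.sin(2 * np.pi * t / 12)  # hypothetical monthly signal
sigma = 0.5                                               # assumed per-point std of the data

# Monte Carlo stand-in for exact Gaussian propagation: decompose many samples.
trends = np.array([
    STL(series + rng.normal(0, sigma, series.size), period=12).fit().trend
    for _ in range(200)
])
trend_mean, trend_std = trends.mean(axis=0), trends.std(axis=0)  # uncertainty band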
Without any prior knowledge about the spatial distribution of values, we are forced to sample densely from INRs to perform visualization tasks like iso-surface extraction which can be very computationally expensive. Recently, range analysis has shown promising results in improving the efficiency of geometric queries, such as ray casting and hierarchical mesh extraction, on INRs for 3D geometries by using arithmetic rules to bound the output range of the network within a spatial region. However, the analysis bounds are often too conservative for complex scientific data. In this paper, we present an improved technique for range analysis by revisiting the arithmetic rules and analyzing the probability distribution of the network output within a spatial region. We model this distribution efficiently as a Gaussian distribution by applying the central limit theorem. Excluding low probability values, we are able to tighten the output bounds, resulting in a more accurate estimation of the value range, and hence more accurate identification of iso-surface cells and more efficient iso-surface extraction on INRs. Our approach demonstrates superior performance in terms of the iso-surface extraction time on four datasets compared to the original range analysis method and can also be generalized to other geometric query tasks.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Haoyu Li"],"doi":"10.1109/TVCG.2024.3365089","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Iso-surface extraction, implicit neural representation, uncertainty propagation, affine arithmetic."],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243365089","time_end":"","time_stamp":"","time_start":"","title":"Improving Efficiency of Iso-Surface Extraction on Implicit Neural Representations Using Uncertainty Propagation","uid":"v-tvcg-20243365089","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20243368060":{"abstract":"Visual analytics supports data analysis tasks within complex domain problems. However, due to the richness of data types, visual designs, and interaction designs, users need to recall and process a significant amount of information when they visually analyze data. These challenges emphasize the need for more intelligent visual analytics methods. Large language models have demonstrated the ability to interpret various forms of textual data, offering the potential to facilitate intelligent support for visual analytics. We propose LEVA, a framework that uses large language models to enhance users' VA workflows at multiple stages: onboarding, exploration, and summarization. To support onboarding, we use large language models to interpret visualization designs and view relationships based on system specifications. For exploration, we use large language models to recommend insights based on the analysis of system status and data to facilitate mixed-initiative exploration. For summarization, we present a selective reporting strategy to retrace analysis history through a stream visualization and generate insight reports with the help of large language models. We demonstrate how LEVA can be integrated into existing visual analytics systems. 
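The probabilistic range idea above can be illustrated crudely: model the network's output inside a cell as Gaussian and keep a high-probability interval instead of a worst-case bound. A toy sketch (the paper derives the Gaussian analytically from the network's arithmetic rules, not by sampling as done here; `f` is any callable standing in for an INR):

import numpy as np

def probable_range(f, cell_min, cell_max, n=256, k=3.0):
    # Estimate a high-probability output range of scalar field f over a cell;
    # k = 3 keeps roughly 99.7% of the mass under the Gaussian model.
    rng = np.random.default_rng(0)
    pts = rng.uniform(cell_min, cell_max, size=(n, len(cell_min)))
    vals = f(pts)
    mu, sd = vals.mean(), vals.std()
    return mu - k * sd, mu + k * sd

# A cell is flagged for iso-value v only if the probable range straddles v:
# lo, hi = probable_range(inr, cell_min, cell_max); active = lo <= v <= hi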
Two usage scenarios and a user study suggest that LEVA effectively aids users in conducting visual analytics.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yuheng Zhao"],"doi":"10.1109/TVCG.2024.3368060","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Insight recommendation, mixed-initiative, interface agent, large language models, visual analytics"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243368060","time_end":"","time_stamp":"","time_start":"","title":"LEVA: Using Large Language Models to Enhance Visual Analytics","uid":"v-tvcg-20243368060","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20243368621":{"abstract":"The use of natural language interfaces (NLIs) to create charts is becoming increasingly popular due to the intuitiveness of natural language interactions. One key challenge in this approach is to accurately capture user intents and transform them into proper chart specifications. This obstructs the wide use of NLI in chart generation, as users' natural language inputs are generally abstract (i.e., ambiguous or under-specified), without a clear specification of visual encodings. Recently, pre-trained large language models (LLMs) have exhibited superior performance in understanding and generating natural language, demonstrating great potential for downstream tasks. Inspired by this major trend, we propose ChartGPT, which generates charts from abstract natural language inputs. However, LLMs struggle to address complex logic problems. To enable the model to accurately specify the complex parameters and perform operations in chart generation, we decompose the generation process into a step-by-step reasoning pipeline, so that the model only needs to reason about a single, specific sub-task during each run. Moreover, LLMs are pre-trained on general datasets, which might be biased for the task of chart generation. To provide adequate visualization knowledge, we create a dataset consisting of abstract utterances and charts and improve model performance through fine-tuning. We further design an interactive interface for ChartGPT that allows users to check and modify the intermediate outputs of each step.
The effectiveness of the proposed system is evaluated through quantitative evaluations and a user study.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yuan Tian"],"doi":"10.1109/TVCG.2024.3368621","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Natural language interfaces, large language models, data visualization"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243368621","time_end":"","time_stamp":"","time_start":"","title":"ChartGPT: Leveraging LLMs to Generate Charts from Abstract Natural Language","uid":"v-tvcg-20243368621","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20243372104":{"abstract":"With the rise of short-form video platforms and the increasing availability of data, we see the potential for people to share short-form videos embedded with data in situ (e.g., daily steps when running) to increase the credibility and expressiveness of their stories. However, creating and sharing such videos in situ is challenging since it involves multiple steps and skills (e.g., data visualization creation and video editing), especially for amateurs. By conducting a formative study (N=10) using three design probes, we collected the motivations and design requirements. We then built VisTellAR, a mobile AR authoring tool, to help amateur video creators embed data visualizations in short-form videos in situ. A two-day user study shows that participants (N=12) successfully created various videos with data visualizations in situ and confirmed the tool's ease of use and learnability. AR pre-stage authoring was useful for helping people set up data visualizations in reality, with more designs in camera movements and interaction with gestures and physical objects for storytelling.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Wai Tong"],"doi":"10.1109/TVCG.2024.3372104","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Personal data, augmented reality, data visualization, storytelling, short-form video"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243372104","time_end":"","time_stamp":"","time_start":"","title":"VisTellAR: Embedding Data Visualization to Short-form Videos Using Mobile Augmented Reality","uid":"v-tvcg-20243372104","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20243372620":{"abstract":"Small multiples are a popular visualization method, displaying different views of a dataset using multiple frames, often with the same scale and axes. However, there is a need to address their potential constraints, especially in the context of human cognitive capacity limits. These limits dictate the maximum information our mind can process at once. We explore the issue of capacity limitation by testing competing theories that describe how the number of frames shown in a display, the scale of the frames, and time constraints impact user performance with small multiples of line charts in an energy grid scenario.
In two online studies (Experiment 1, n = 141, and Experiment 2, n = 360) and a follow-up eye-tracking analysis (n = 5), we found a linear decline in accuracy with increasing frames across seven tasks, which was not fully explained by differences in frame size, suggesting visual search challenges. Moreover, the studies demonstrate that highlighting specific frames can mitigate some visual search difficulties but, surprisingly, not eliminate them. This research offers insights into optimizing the utility of small multiples by aligning them with human limitations.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Helia Hosseinpour"],"doi":"10.1109/TVCG.2024.3372620","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Cognition, small multiples, time-series data"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243372620","time_end":"","time_stamp":"","time_start":"","title":"Examining Limits of Small Multiples: Frame Quantity Impacts Judgments with Line Graphs","uid":"v-tvcg-20243372620","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20243374571":{"abstract":"Visualization Recommendation Systems (VRSs) are a novel and challenging field of study aiming to help generate insightful visualizations from data and support non-expert users in information discovery. Among the many contributions proposed in this area, some systems embrace the ambitious objective of imitating human analysts to identify relevant relationships in data and make appropriate design choices to represent these relationships with insightful charts. We denote these systems as \"agnostic\" VRSs since they do not rely on human-provided constraints and rules but try to learn the task autonomously. Despite the high application potential of agnostic VRSs, their progress is hindered by several obstacles, including the absence of standardized datasets to train recommendation algorithms, the difficulty of learning design rules, and defining quantitative criteria for evaluating the perceptual effectiveness of generated plots. This paper summarizes the literature on agnostic VRSs and outlines promising future research directions.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Luca Podo"],"doi":"10.1109/TVCG.2024.3374571","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243374571","time_end":"","time_stamp":"","time_start":"","title":"Agnostic Visual Recommendation Systems: Open Challenges and Future Directions","uid":"v-tvcg-20243374571","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20243376406":{"abstract":"Visualizing event timelines for collaborative text writing is an important application for navigating and understanding such data, as time passes and the size and complexity of both text and timeline increase. They are often employed by applications such as code repositories and collaborative text editors.
In this paper, we present a visualization tool to explore historical records of writing of legislative texts, which were discussed and voted on by an assembly of representatives. Our visualization focuses on event timelines from text documents that involve multiple people and different topics, allowing for observation of different proposed versions of said text or tracking data provenance of given text sections, while highlighting the connections between all elements involved. We also describe the process of designing such a tool alongside domain experts, with three steps of evaluation being conducted to verify the effectiveness of our design.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Alfie Abdul-Rahman"],"doi":"10.1109/TVCG.2024.3376406","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Data visualization, Collaboration, History, Humanities, Writing, Navigation, Metadata"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243376406","time_end":"","time_stamp":"","time_start":"","title":"Interactive Hierarchical Timeline for Collaborative Text Negotiation in Historical Records","uid":"v-tvcg-20243376406","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20243381453":{"abstract":"Scatterplots provide a visual representation of bivariate data (or 2D embeddings of multivariate data) that allows for effective analyses of data dependencies, clusters, trends, and outliers. Unfortunately, classical scatterplots suffer from scalability issues, since growing data sizes eventually lead to overplotting and visual clutter on a screen with a fixed resolution, which hinders the data analysis process. We propose an algorithm that compensates for irregular sample distributions by a smooth transformation of the scatterplot's visual domain. Our algorithm evaluates the scatterplot's density distribution to compute a regularization mapping based on integral images of the rasterized density function. The mapping preserves the samples' neighborhood relations. Few regularization iterations suffice to achieve a nearly uniform sample distribution that efficiently uses the available screen space. We further propose approaches to visually convey the transformation that was applied to the scatterplot and compare them in a user study. 
We present a novel parallel algorithm for fast GPU-based integral-image computation, which allows for integrating our de-cluttering approach into interactive visual data analysis systems.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Hennes Rave"],"doi":"10.1109/TVCG.2024.3381453","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243381453","time_end":"","time_stamp":"","time_start":"","title":"De-cluttering Scatterplots with Integral Images","uid":"v-tvcg-20243381453","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20243382607":{"abstract":"Advanced manufacturing creates increasingly complex objects with material compositions that are often difficult to characterize by a single modality. Our domain scientists are going beyond traditional methods by employing both X-ray and neutron computed tomography to obtain complementary representations expected to better resolve material boundaries. However, the use of two modalities creates its own challenges for visualization, requiring either complex adjustments of multimodal transfer functions or the need for multiple views. Together with experts in nondestructive evaluation, we designed a novel interactive multimodal visualization approach to create a combined view of the co-registered X-ray and neutron acquisitions of industrial objects. Using an automatic topological segmentation of the bivariate histogram of X-ray and neutron values as a starting point, the system provides a simple yet effective interface to easily create, explore, and adjust a multimodal visualization. We propose a widget with simple brushing interactions that enables the user to quickly correct the segmented histogram results. Our semi-automated system enables domain experts to intuitively explore large multimodal datasets without the need for either advanced segmentation algorithms or knowledge of visualization techniques. We demonstrate our approach using synthetic examples, industrial phantom objects created to stress multimodal scanning techniques, and real-world objects, and we discuss expert feedback.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Xuan Huang"],"doi":"10.1109/TVCG.2024.3382607","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243382607","time_end":"","time_stamp":"","time_start":"","title":"Bimodal Visualization of Industrial X-ray and Neutron Computed Tomography Data","uid":"v-tvcg-20243382607","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20243382760":{"abstract":"Time-stamped event sequences (TSEQs) are time-oriented data without value information, shifting the focus of users to the exploration of temporal event occurrences. TSEQs exist in application domains, such as sleeping behavior, earthquake aftershocks, and stock market crashes. Domain experts face four challenges, for which they could use interactive and visual data analysis methods.
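Integral images (summed-area tables) are the primitive behind the de-cluttering method above: once the scatterplot density is rasterized, any axis-aligned box sum costs four lookups. A minimal CPU sketch in NumPy (the paper's contribution is a parallel GPU variant; the density here is a placeholder):

import numpy as np

density = np.random.rand(512, 512)           # rasterized density (placeholder)
sat = density.cumsum(axis=0).cumsum(axis=1)  # summed-area table / integral image

def box_sum(sat, r0, c0, r1, c1):
    # Sum of density over rows r0..r1 and columns c0..c1 (inclusive), in O(1).
    total = sat[r1, c1]
    if r0 > 0:
        total -= sat[r0 - 1, c1]
    if c0 > 0:
        total -= sat[r1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += sat[r0 - 1, c0 - 1]
    return total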
First, TSEQs can be large with respect to both the number of sequences and events, often leading to millions of events. Second, domain experts need validated metrics and features to identify interesting patterns. Third, after identifying interesting patterns, domain experts contextualize the patterns to foster sensemaking. Finally, domain experts seek to reduce data complexity by data simplification and machine learning support. We present IVESA, a visual analytics approach for TSEQs. It supports the analysis of TSEQs at the granularities of sequences and events, supported with metrics and feature analysis tools. IVESA has multiple linked views that support overview, sort+filter, comparison, details-on-demand, and metadata relation-seeking tasks, as well as data simplification through feature analysis, interactive clustering, filtering, and motif detection and simplification. We evaluated IVESA with three case studies and a user study with six domain experts working with six different datasets and applications. Results demonstrate the usability and generalizability of IVESA across applications and cases that had up to 1,000,000 events.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["J\u00fcrgen Bernard"],"doi":"10.1109/TVCG.2024.3382760","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Time-Stamped Event Sequences, Time-Oriented Data, Visual Analytics, Data-First Design Study, Iterative Design, Visual Interfaces, User Evaluation"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243382760","time_end":"","time_stamp":"","time_start":"","title":"Visual Analysis of Time-Stamped Event Sequences","uid":"v-tvcg-20243382760","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20243383089":{"abstract":"The advances in AI-enabled techniques have accelerated the creation and automation of visualizations in the past decade. However, presenting visualizations in a descriptive and generative format remains a challenge. Moreover, current visualization embedding methods focus on standalone visualizations, neglecting the importance of contextual information for multi-view visualizations. To address this issue, we propose a new representation model, Chart2Vec, to learn a universal embedding of visualizations with context-aware information. Chart2Vec aims to support a wide range of downstream visualization tasks such as recommendation and storytelling. Our model considers both structural and semantic information of visualizations in declarative specifications. To enhance the context-aware capability, Chart2Vec employs multi-task learning on both supervised and unsupervised tasks concerning the cooccurrence of visualizations. We evaluate our method through an ablation study, a user study, and a quantitative comparison. 
The results verified the consistency of our embedding method with human cognition and showed its advantages over existing methods.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Qing Chen"],"doi":"10.1109/TVCG.2024.3383089","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Representation Learning, Multi-view Visualization, Visual Storytelling, Visualization Embedding"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243383089","time_end":"","time_stamp":"","time_start":"","title":"Chart2Vec: A Universal Embedding of Context-Aware Visualizations","uid":"v-tvcg-20243383089","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20243385118":{"abstract":"Genomics is at the core of precision medicine, and there are high expectations on genomics-enabled improvement of patient outcomes in the years to come. Around the world, initiatives to increase the use of DNA sequencing in clinical routine are being deployed, such as the use of broad panels in the standard care for oncology patients. Such a development comes at the cost of increased demands on throughput in genomic data analysis. In this paper, we use the task of copy number variant (CNV) analysis as a context for exploring visualization concepts for clinical genomics. CNV calls are generated algorithmically, but time-consuming manual intervention is needed to separate relevant findings from irrelevant ones in the resulting large call candidate lists. We present a visualization environment, named Copycat, to support this review task in a clinical scenario. Key components are a scatter-glyph plot replacing the traditional list visualization, and a glyph representation designed for at-a-glance relevance assessments. Moreover, we present results from a formative evaluation of the prototype by domain specialists, from which we elicit insights to guide both prototype improvements and visualization for clinical genomics in general.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Emilia St\u00e5hlbom"],"doi":"10.1109/TVCG.2024.3385118","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Visualization, genomics, copy number variants, clinical decision support, evaluation"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243385118","time_end":"","time_stamp":"","time_start":"","title":"Visualization for diagnostic review of copy number variants in complex DNA sequencing data","uid":"v-tvcg-20243385118","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20243390219":{"abstract":"This system paper documents the technical foundations for the extension of the Topology ToolKit (TTK) to distributed-memory parallelism with the Message Passing Interface (MPI). While several recent papers introduced topology-based approaches for distributed-memory environments, these were reporting experiments obtained with tailored, mono-algorithm implementations. 
In contrast, we describe in this paper a versatile approach (supporting both triangulated domains and regular grids) for the support of topological analysis pipelines, i.e., a sequence of topological algorithms interacting together, possibly on distinct numbers of processes. While developing this extension, we faced several algorithmic and software engineering challenges, which we document in this paper. We describe an MPI extension of TTK\u2019s data structure for triangulation representation and traversal, a central component to the global performance and generality of TTK\u2019s topological implementations. We also introduce an intermediate interface between TTK and MPI, both at the global pipeline level, and at the fine-grain algorithmic level. We provide a taxonomy for the distributed-memory topological algorithms supported by TTK, depending on their communication needs, and provide examples of hybrid MPI+thread parallelizations. Detailed performance analyses show that parallel efficiencies range from 20% to 80% (depending on the algorithms), and that the MPI-specific preconditioning introduced by our framework induces a negligible computation time overhead. We illustrate the new distributed-memory capabilities of TTK with an example of an advanced analysis pipeline, combining multiple algorithms, run on the largest publicly available dataset we have found (120 billion vertices) on a standard cluster with 64 nodes (for a total of 1536 cores). Finally, we provide a roadmap for the completion of TTK\u2019s MPI extension, along with generic recommendations for each algorithm communication category.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Julien Tierny"],"doi":"10.1109/TVCG.2024.3390219","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Topological data analysis, high-performance computing, distributed-memory algorithms."],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243390219","time_end":"","time_stamp":"","time_start":"","title":"TTK is Getting MPI-Ready","uid":"v-tvcg-20243390219","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20243392476":{"abstract":"Areas of interest (AOIs) are well-established means of providing semantic information for visualizing, analyzing, and classifying gaze data. However, the usual manual annotation of AOIs is time-consuming and further impaired by ambiguities in label assignments. To address these issues, we present an interactive labeling approach that combines visualization, machine learning, and user-centered explainable annotation. Our system provides uncertainty-aware visualization to build trust in classification with an increasing number of annotated examples. It combines specifically designed EyeFlower glyphs, dimensionality reduction, and selection and exploration techniques in an integrated workflow. The approach is versatile and hardware-agnostic, supporting video stimuli from stationary and unconstrained mobile eye tracking alike.
We conducted an expert review to assess labeling strategies and trust building.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Maurice Koch"],"doi":"10.1109/TVCG.2024.3392476","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Visual analytics, eye tracking, uncertainty, active learning, trust building"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243392476","time_end":"","time_stamp":"","time_start":"","title":"Active Gaze Labeling: Visualization for Trust Building","uid":"v-tvcg-20243392476","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20243392587":{"abstract":"The issue of traffic congestion poses a significant obstacle to the development of global cities. One promising solution to tackle this problem is intelligent traffic signal control (TSC). Recently, TSC strategies leveraging reinforcement learning (RL) have garnered attention among researchers. However, the evaluation of these models has primarily relied on fixed metrics like reward and queue length. This limited evaluation approach provides only a narrow view of the model\u2019s decision-making process, impeding its practical implementation. Moreover, effective TSC necessitates coordinated actions across multiple intersections. Existing visual analysis solutions fall short when applied in multi-agent settings. In this study, we delve into the challenge of interpretability in multi-agent reinforcement learning (MARL), particularly within the context of TSC. We propose MARLens, a visual analytics system tailored to understand MARL-based TSC. Our system serves as a versatile platform for both RL and TSC researchers. It empowers them to explore the model\u2019s features from various perspectives, revealing its decision-making processes and shedding light on interactions among different agents. To facilitate quick identification of critical states, we have devised multiple visualization views, complemented by a traffic simulation module that allows users to replay specific training scenarios. To validate the utility of our proposed system, we present three comprehensive case studies, incorporate insights from domain experts through interviews, and conduct a user study. 
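The gaze-labeling entry above pairs its uncertainty-aware visualization with active learning, where the model proposes the next example to annotate. A generic uncertainty-sampling loop, sketched with scikit-learn (features, oracle, and budget are all placeholders; neither system reduces to this loop):

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))                  # placeholder feature vectors
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # hidden ground truth ("the expert")

# Seed with a few labels from each class; the rest form the unlabeled pool.
labeled = list(np.where(y == 0)[0][:5]) + list(np.where(y == 1)[0][:5])
pool = [i for i in range(len(X)) if i not in labeled]

for _ in range(20):                            # annotation budget
    clf = LogisticRegression().fit(X[labeled], y[labeled])
    proba = clf.predict_proba(X[pool])[:, 1]
    q = pool.pop(int(np.argmin(np.abs(proba - 0.5))))  # most uncertain example
    labeled.append(q)                          # the expert supplies y[q]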
These collective efforts underscore the feasibility and effectiveness of MARLens in enhancing our understanding of MARL-based TSC systems and pave the way for more informed and efficient traffic management strategies.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Haipeng Zeng"],"doi":"10.1109/TVCG.2024.3392587","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Traffic signal control, multi-agent, reinforcement learning, visual analytics"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243392587","time_end":"","time_stamp":"","time_start":"","title":"MARLens: Understanding Multi-agent Reinforcement Learning for Traffic Signal Control via Visual Analytics","uid":"v-tvcg-20243392587","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20243394745":{"abstract":"The fund investment industry heavily relies on the expertise of fund managers, who bear the responsibility of managing portfolios on behalf of clients. With their investment knowledge and professional skills, fund managers gain a competitive advantage over the average investor in the market. Consequently, investors prefer entrusting their investments to fund managers rather than directly investing in funds. For these investors, the primary concern is selecting a suitable fund manager. While previous studies have employed quantitative or qualitative methods to analyze various aspects of fund managers, such as performance metrics, personal characteristics, and performance persistence, they often face challenges when dealing with a large candidate space. Moreover, distinguishing whether a fund manager's performance stems from skill or luck poses a challenge, making it difficult to align with investors' preferences in the selection process. To address these challenges, this study characterizes the requirements of investors in selecting suitable fund managers and proposes an interactive visual analytics system called FMLens. This system streamlines the fund manager selection process, allowing investors to efficiently assess and deconstruct fund managers' investment styles and abilities across multiple dimensions. Additionally, the system empowers investors to scrutinize and compare fund managers' performances. The effectiveness of the approach is demonstrated through two case studies and a qualitative user study. 
Feedback from domain experts indicates that the system excels in analyzing fund managers from diverse perspectives, enhancing the efficiency of fund manager evaluation and selection.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Longfei Chen"],"doi":"10.1109/TVCG.2024.3394745","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Financial Data, Fund Manager Selection, Visual Analytics"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243394745","time_end":"","time_stamp":"","time_start":"","title":"FMLens: Towards Better Scaffolding the Process of Fund Manager Selection in Fund Investments","uid":"v-tvcg-20243394745","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20243397004":{"abstract":"Data charts are prevalent across various fields due to their efficacy in conveying complex data relationships. However, static charts may sometimes struggle to engage readers and efficiently present intricate information, potentially resulting in limited understanding. We introduce \u201cLive Charts,\u201d a new format of presentation that decomposes complex information within a chart and explains the information pieces sequentially through rich animations and accompanying audio narration. We propose an automated approach to revive static charts into Live Charts. Our method integrates GNN-based techniques to analyze the chart components and extract data from charts. Then we adopt large natural language models to generate appropriate animated visuals along with a voice-over to produce Live Charts from static ones. We conducted a thorough evaluation of our approach, which involved the model performance, use cases, a crowd-sourced user study, and expert interviews. The results demonstrate Live Charts offer a multi-sensory experience where readers can follow the information and understand the data insights better. We analyze the benefits and drawbacks of Live Charts over static charts as a new information consumption experience.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Lu Ying"],"doi":"10.1109/TVCG.2024.3397004","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Charts, storytelling, machine learning, automatic visualization"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243397004","time_end":"","time_stamp":"","time_start":"","title":"Reviving Static Charts into Live Charts","uid":"v-tvcg-20243397004","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20243402610":{"abstract":"Point clouds are widely used as a versatile representation of 3D entities and scenes for all scale domains and in a variety of application areas, serving as a fundamental data category to directly convey spatial features. However, due to point sparsity, lack of structure, irregular distribution, and acquisition-related inaccuracies, results of point cloud visualization are often subject to visual complexity and ambiguity. 
In this regard, non-photorealistic rendering can improve visual communication by reducing the cognitive effort required to understand an image or scene and by directing attention to important features. In the last 20 years, this has been demonstrated by various non-photorealistic rendering approaches that were proposed to target point clouds specifically. However, they do not use a common language or structure for assessment, which complicates comparison and selection. Further, recent developments regarding point cloud characteristics and processing, such as massive data size or web-based rendering, are rarely considered. To address these issues, we present a survey on non-photorealistic rendering approaches for point cloud visualization, providing an overview of the current state of research. We derive a structure for the assessment of approaches, proposing seven primary dimensions for the categorization regarding intended goals, data requirements, used techniques, and mode of operation. We then systematically assess corresponding approaches and utilize this classification to identify trends and research gaps, motivating future research in the development of effective non-photorealistic point cloud rendering methods.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Ole Wegen"],"doi":"10.1109/TVCG.2024.3402610","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Point clouds, survey, non-photorealistic rendering"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243402610","time_end":"","time_stamp":"","time_start":"","title":"A Survey on Non-photorealistic Rendering Approaches for Point Cloud Visualization","uid":"v-tvcg-20243402610","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20243402834":{"abstract":"Impact dynamics are crucial for estimating the growth patterns of NFT projects by tracking the diffusion and decay of their relative appeal among stakeholders. Machine learning methods for impact dynamics analysis are incomprehensible and rigid in terms of their interpretability and transparency, whilst stakeholders require interactive tools for informed decision-making. Nevertheless, developing such a tool is challenging due to the substantial, heterogeneous NFT transaction data and the requirements for flexible, customized interactions. To this end, we integrate intuitive visualizations to unveil the impact dynamics of NFT projects. We first conduct a formative study and summarize analysis criteria, including substitution mechanisms, impact attributes, and design requirements from stakeholders. Next, we propose the Minimal Substitution Model to simulate substitutive systems of NFT projects that can be feasibly represented as node-link graphs. Particularly, we utilize attribute-aware techniques to embed the project status and stakeholder behaviors in the layout design. Accordingly, we develop a multi-view visual analytics system, namely NFTracer, allowing interactive analysis of impact dynamics in NFT transactions. We demonstrate the informativeness, effectiveness, and usability of NFTracer by performing two case studies with domain experts and one user study with stakeholders. The studies suggest that NFT projects featuring a higher degree of similarity are more likely to substitute each other.
The impact of NFT projects within substitutive systems is contingent upon the degree of stakeholders\u2019 influx and projects\u2019 freshness.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yifan Cao"],"doi":"10.1109/TVCG.2024.3402834","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Stakeholders, Nonfungible Tokens, Social Networking Online, Visual Analytics, Network Analyzers, Measurement, Layout, Impact Dynamics Analysis, Non Fungible Tokens NFTs, NFT Transaction Data, Substitutive Systems, Visual Analytics"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243402834","time_end":"","time_stamp":"","time_start":"","title":"Tracing NFT Impact Dynamics in Transaction-flow Substitutive Systems with Visual Analytics","uid":"v-tvcg-20243402834","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20243406387":{"abstract":"The process of labeling medical text plays a crucial role in medical research. Nonetheless, creating accurately labeled medical texts of high quality is often a time-consuming task that requires specialized domain knowledge. Traditional methods for generating labeled data typically rely on rigid rule-based approaches, which may not adapt well to new tasks. While recent machine learning (ML) methodologies have mitigated the manual labeling efforts, configuring models to align with specific research requirements can be challenging for labelers without technical expertise. Moreover, automated labeling techniques, such as transfer learning, face difficulties in directly incorporating expert input, whereas semi-automated methods, like data programming, allow knowledge integration through rules or knowledge bases but may lack continuous result refinement throughout the entire labeling process. In this study, we present a collaborative human-ML teaming workflow that seamlessly integrates visual cluster analysis and active learning to assist domain experts in labeling medical text with high efficiency. Additionally, we introduce an innovative neural network model called the embedding network, which incorporates expert insights to generate task-specific embeddings for medical texts. We integrate the workflow and embedding network into a visual analytics tool named KMTLabeler, equipped with coordinated multi-level views and interactions.
Two illustrative case studies, along with a controlled user study, provide substantial evidence of the effectiveness of KMTLabeler in creating an efficient labeling environment for medical text classification.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["He Wang"],"doi":"10.1109/TVCG.2024.3406387","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Medical Text Labeling, Expert Knowledge, Embedding Network, Visual Cluster Analysis, Active Learning"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243406387","time_end":"","time_stamp":"","time_start":"","title":"KMTLabeler: An Interactive Knowledge-Assisted Labeling Tool for Medical Text Classification","uid":"v-tvcg-20243406387","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20243408255":{"abstract":"Generative text-to-image models, which allow users to create appealing images through a text prompt, have seen a dramatic increase in popularity in recent years. However, most users have a limited understanding of how such models work and often rely on trial and error strategies to achieve satisfactory results. The prompt history contains a wealth of information that could provide users with insights into what has been explored and how the prompt changes impact the output image, yet little research attention has been paid to the visual analysis of such a process to support users. We propose the Image Variant Graph, a novel visual representation designed to support comparing prompt-image pairs and exploring the editing history. The Image Variant Graph models prompt differences as edges between corresponding images and presents the distances between images through projection. Based on the graph, we developed the PrompTHis system through co-design with artists. Based on the review and analysis of the prompting history, users can better understand the impact of prompt changes and have more effective control over image generation. A quantitative user study and qualitative interviews demonstrate that PrompTHis can help users review the prompt history, make sense of the model, and plan their creative process.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yuhan Guo"],"doi":"10.1109/TVCG.2024.3408255","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Text visualization, image visualization, text-to-image generation, editing history, provenance, generative art"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243408255","time_end":"","time_stamp":"","time_start":"","title":"PrompTHis: Visualizing the Process and Influence of Prompt Editing during Text-to-Image Creation","uid":"v-tvcg-20243408255","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20243411575":{"abstract":"Creating an animated data video with audio narration is a time-consuming and complex task that requires expertise. It involves designing complex animations, turning written scripts into audio narrations, and synchronizing visual changes with the narrations.
This paper presents WonderFlow, an interactive authoring tool that facilitates narration-centric design of animated data videos. WonderFlow allows authors to easily specify semantic links between text and the corresponding chart elements. Then it automatically generates audio narration by leveraging text-to-speech techniques and aligns the narration with an animation. WonderFlow provides a structure-aware animation library designed to ease chart animation creation, enabling authors to apply pre-designed animation effects to common visualization components. Additionally, authors can preview and refine their data videos within the same system, without having to switch between different creation tools. A series of evaluation results confirmed that WonderFlow is easy to use and simplifies the creation of data videos with narration-animation interplay.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Leixian Shen"],"doi":"10.1109/TVCG.2024.3411575","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Data video, Data visualization, Narration-animation interplay, Storytelling, Authoring tool"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243411575","time_end":"","time_stamp":"","time_start":"","title":"WonderFlow: Narration-Centric Design of Animated Data Videos","uid":"v-tvcg-20243411575","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20243411786":{"abstract":"We present a novel method for the interactive construction and rendering of extremely large molecular scenes, capable of representing multiple biological cells in atomistic detail. Our method is tailored for scenes that are procedurally constructed based on a given set of building rules. Rendering of large scenes normally requires the entire scene available in-core, or alternatively, it requires out-of-core management to load data into the memory hierarchy as a part of the rendering loop. Instead of out-of-core memory management, we propose to procedurally generate the scene on-demand on the fly. The key idea is a positional- and view-dependent procedural scene-construction strategy, where only a fraction of the atomistic scene around the camera is available in the GPU memory at any given time. The atomistic detail is populated into a uniform-space partitioning using a grid that covers the entire scene. Most of the grid cells are not filled with geometry; only those potentially seen by the camera are populated. The atomistic detail is populated in a compute shader and its representation is connected with acceleration data structures for hardware ray-tracing of modern GPUs. Objects that are far away, where atomistic detail is not perceivable from a given viewpoint, are represented by a triangle mesh mapped with a seamless texture, generated from the rendering of geometry from atomistic detail. The algorithm consists of two pipelines, the construction-compute pipeline, and the rendering pipeline, which work together to render molecular scenes containing trillions of atoms at an atomistic resolution far beyond the limit of the GPU memory.
We demonstrate our technique on multiple models of SARS-CoV-2 and the red blood cell.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Ruwayda Alharbi"],"doi":"10.1109/TVCG.2024.3411786","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Interactive rendering, view-guided scene construction, biological data, hardware ray tracing"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243411786","time_end":"","time_stamp":"","time_start":"","title":"\u201cNanomatrix: Scalable Construction of Crowded Biological Environments\u201d","uid":"v-tvcg-20243411786","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20243413195":{"abstract":"With the growing complexity and volume of data, visualizations have become more intricate, often requiring advanced techniques to convey insights. These complex charts are prevalent in everyday life, and individuals who lack knowledge in data visualization may find them challenging to understand. This paper investigates using Large Language Models (LLMs) to help users with low data literacy understand complex visualizations. While previous studies focus on text interactions with users, we noticed that visual cues are also critical for interpreting charts. We introduce an LLM application that supports both text and visual interaction for guiding chart interpretation. Our study with 26 participants revealed that the in-situ support effectively assisted users in interpreting charts and enhanced learning by addressing specific chart-related questions and encouraging further exploration. Visual communication allowed participants to convey their interests straightforwardly, eliminating the need for textual descriptions. However, the LLM assistance led users to engage less with the system, resulting in fewer insights from the visualizations. This suggests that users, particularly those with lower data literacy and motivation, may have over-relied on the LLM agent. We discuss opportunities for deploying LLMs to enhance visualization literacy while emphasizing the need for a balanced approach.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Kiroong Choe"],"doi":"10.1109/TVCG.2024.3413195","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Visualization literacy, Large language model, Visual communication"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243413195","time_end":"","time_stamp":"","time_start":"","title":"Enhancing Data Literacy On-demand: LLMs as Guides for Novices in Chart Interpretation","uid":"v-tvcg-20243413195","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""}} +{"a-ldav-1002":{"abstract":"Cuneiform is the earliest known system of writing, first developed for the Sumerian language of southern Mesopotamia in the second half of the 4th millennium BC. Cuneiform signs are obtained by impressing a stylus on fresh clay tablets. For certain purposes, e.g. authentication by seal imprint, some cuneiform tablets were enclosed in clay envelopes, which cannot be opened without destroying them. 
The aim of our interdisciplinary project is the non-invasive study of clay tablets. A portable X-ray micro-CT scanner is developed to acquire density data of such artifacts on a high-resolution, regular 3D grid at collection sites. The resulting volume data is processed through feature-preserving denoising, extraction of high-accuracy surfaces using a manifold dual marching cubes algorithm and extraction of local features by enhanced curvature rendering and ambient occlusion. For the non-invasive study of cuneiform inscriptions, the tablet is virtually separated from its envelope by curvature-based segmentation. The computational- and data-intensive algorithms are optimized for near-real-time offline usage with limited resources at collection sites. To visualize the complexity-reduced and octree-based compressed representation of surfaces, we develop and implement an interactive application. To facilitate the analysis of such clay tablets, we implement shape-based feature extraction algorithms to enhance cuneiform recognition. Our workflow supports innovative 3D display and interaction techniques such as autostereoscopic displays and gesture control.","accessible_pdf":false,"authors":[{"affiliations":["Universit\u00e4t Hamburg, Hamburg, Germany"],"email":"stephan.olbrich@uni-hamburg.de","is_corresponding":true,"name":"Stephan Olbrich"},{"affiliations":["Universit\u00e4t Hamburg, Hamburg, Germany"],"email":"andreas.beckert@uni-hamburg.de","is_corresponding":false,"name":"Andreas Beckert"},{"affiliations":["Centre National de la Recherche Scientifique (CNRS), Nanterre, France"],"email":"cecile.michel@cnrs.fr","is_corresponding":false,"name":"C\u00e9cile Michel"},{"affiliations":["Deutsches Elektronen-Synchrotron (DESY), Hamburg, Germany","Universit\u00e4t Hamburg, Hamburg, Germany"],"email":"christian.schroer@desy.de","is_corresponding":false,"name":"Christian Schroer"},{"affiliations":["Deutsches Elektronen-Synchrotron (DESY), Hamburg, Germany","Universit\u00e4t Hamburg, Hamburg, Germany"],"email":"samaneh.ehteram@desy.de","is_corresponding":false,"name":"Samaneh Ehteram"},{"affiliations":["Deutsches Elektronen-Synchrotron (DESY), Hamburg, Germany"],"email":"andreas.schropp@desy.de","is_corresponding":false,"name":"Andreas Schropp"},{"affiliations":["Deutsches Elektronen-Synchrotron (DESY), Hamburg, Germany"],"email":"philipp.paetzold@desy.de","is_corresponding":false,"name":"Philipp Paetzold"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Stephan Olbrich"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"associated","presentation_mode":"","session_id":"a-ldav0","slot_id":"a-ldav-1002","time_end":"","time_stamp":"","time_start":"","title":"Efficient Analysis and Visualization of High-Resolution Computed Tomography Data for the Exploration of Enclosed Cuneiform Tablets","uid":"a-ldav-1002","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"a-ldav-1003":{"abstract":"Dimensionality reduction (DR) is a well-established approach for the visualization of high-dimensional data sets. While DR methods are often applied to typical DR benchmark data sets in the literature, they might suffer from high runtime complexity and memory requirements, making them unsuitable for large data visualization especially in environments outside of high-performance computing. 
To perform DR on large data sets, we propose the use of out-of-sample extensions. Such extensions allow inserting new data into existing projections, which we leverage to iteratively project data into a reference projection that consists only of a small manageable subset. This process makes it possible to perform DR out-of-core on large data, which would otherwise not be possible due to memory and runtime limitations. For metric multidimensional scaling (MDS), we contribute an implementation with out-of-sample projection capability since typical software libraries do not support it. We provide an evaluation of the projection quality of five common DR algorithms (MDS, PCA, t-SNE, UMAP, and autoencoders) using quality metrics from the literature and analyze the trade-off between the size of the reference set and projection quality. The runtime behavior of the algorithms is also quantified with respect to reference set size, out-of-sample batch size, and dimensionality of the data sets. Furthermore, we compare the out-of-sample approach to other recently introduced DR methods, such as PaCMAP and TriMAP, which claim to handle larger data sets than traditional approaches. To showcase the usefulness of DR on this large scale, we contribute a use case where we analyze ensembles of streamlines amounting to one billion projected instances.","accessible_pdf":false,"authors":[{"affiliations":["Universit\u00e4t Stuttgart, Stuttgart, Germany"],"email":"lucareichmann01@gmail.com","is_corresponding":false,"name":"Luca Marcel Reichmann"},{"affiliations":["University of Stuttgart, Stuttgart, Germany"],"email":"david.haegele@visus.uni-stuttgart.de","is_corresponding":true,"name":"David H\u00e4gele"},{"affiliations":["University of Stuttgart, Stuttgart, Germany"],"email":"weiskopf@visus.uni-stuttgart.de","is_corresponding":false,"name":"Daniel Weiskopf"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["David H\u00e4gele"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"associated","presentation_mode":"","session_id":"a-ldav0","slot_id":"a-ldav-1003","time_end":"","time_stamp":"","time_start":"","title":"Out-of-Core Dimensionality Reduction for Large Data via Out-of-Sample Extensions","uid":"a-ldav-1003","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"a-ldav-1006":{"abstract":"Scientists generate petabytes of data daily to help uncover environmental trends or behaviors that are hard to predict. For example, understanding climate simulations based on the long-term average of temperature, precipitation, and other environmental variables is essential to predicting and establishing root causes of future undesirable scenarios and assessing possible mitigation strategies. Unfortunately, bottlenecks in petascale workflows restrict scientists' ability to analyze and visualize the necessary information due to requirements for extensive computational resources, obstacles in data accessibility, and inefficient analysis algorithms. This paper presents an approach to managing, visualizing, and analyzing petabytes of data within a browser on equipment ranging from the top NASA supercomputer to commodity hardware like a laptop. 
Our approach is based on a novel data fabric abstraction layer that allows querying scientific information in a form that is user-friendly while hiding the complexities of dealing with file systems or cloud services. We also optimize network utilization while streaming from petascale repositories through state-of-the-art progressive compression algorithms. Based on this abstraction, we provide customizable dashboards that can be accessed from any device with an internet connection, offering straightforward access to vast amounts of data typically not available to those without access to uniquely expensive hardware resources. Our dashboards provide and improve the ability to access and, more importantly, use massive data for a wide range of users, from top scientists with access to leadership-class computing environments to undergraduate students of disadvantaged backgrounds from minority-serving institutions. We focus on NASA's use of petascale climate datasets as an example of particular societal impact and, therefore, a case where achieving equity in science participation is critical. In particular, we validate our approach by improving the ability of climate scientists to explore their data even on the top NASA supercomputer, introducing the ability to study their data in a fully interactive environment instead of being limited to using pre-choreographed videos that can each take days to generate. We also successfully introduced the same dashboards and simplified training material in an undergraduate class on Geospatial Analysis in a minority-serving campus (Utah State Blanding) with 69% of the students being Native American and 86% being low-income. The same dashboards are also released in simplified form to the general public, providing an unparalleled democratization of the access and use of climate data that can be extended to most scientific domains.","accessible_pdf":false,"authors":[{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"aashishpanta0@gmail.com","is_corresponding":true,"name":"Aashish Panta"},{"affiliations":["Scientific Computing and Imaging Institute, Salt Lake City, United States"],"email":"xuanhuang@sci.utah.edu","is_corresponding":false,"name":"Xuan Huang"},{"affiliations":["NASA Ames Research Center, Mountain View, United States"],"email":"nina.mccurdy@gmail.com","is_corresponding":false,"name":"Nina McCurdy"},{"affiliations":["NASA, Mountain View, United States"],"email":"david.ellsworth@nasa.gov","is_corresponding":false,"name":"David Ellsworth"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"amy.a.gooch@gmail.com","is_corresponding":false,"name":"Amy Gooch"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"scrgiorgio@gmail.com","is_corresponding":false,"name":"Giorgio Scorzelli"},{"affiliations":["NASA, Pasadena, United States"],"email":"hector.torres.gutierrez@jpl.nasa.gov","is_corresponding":false,"name":"Hector Torres"},{"affiliations":["Caltech, Pasadena, United States"],"email":"pklein@caltech.edu","is_corresponding":false,"name":"Patrice Klein"},{"affiliations":["Utah State University Blanding, Blanding, United States"],"email":"gustavo.ovando@usu.edu","is_corresponding":false,"name":"Gustavo Ovando-Montejo"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"pascucci.valerio@gmail.com","is_corresponding":false,"name":"Valerio
Pascucci"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Aashish Panta"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"associated","presentation_mode":"","session_id":"a-ldav0","slot_id":"a-ldav-1006","time_end":"","time_stamp":"","time_start":"","title":"Web-based Visualization and Analytics of Petascale data: Equity as a Tide that Lifts All Boats","uid":"a-ldav-1006","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"a-ldav-1011":{"abstract":"This paper describes the adaptation of a well-scaling parallel algorithm for computing Morse-Smale segmentations based on path compression to a distributed computational setting. Additionally, we extend the algorithm to efficiently compute connected components in distributed structured and unstructured grids, based either on the connectivity of the underlying mesh or a feature mask. Our implementation is seamlessly integrated with the distributed extension of the Topology ToolKit (TTK), ensuring robust performance and scalability. To demonstrate the practicality and efficiency of our algorithms, we conducted a series of scaling experiments on large-scale datasets, with sizes of up to 4096^3 vertices on up to 64 nodes and 768 cores.","accessible_pdf":false,"authors":[{"affiliations":["RPTU Kaiserslautern-Landau, Kaiserslautern, Germany"],"email":"mswill@rhrk.uni-kl.de","is_corresponding":true,"name":"Michael Will"},{"affiliations":["RPTU Kaiserslautern-Landau, Kaiserslautern, Germany"],"email":"jl@jluk.de","is_corresponding":false,"name":"Jonas Lukasczyk"},{"affiliations":["CNRS, Paris, France","Sorbonne Universit\u00e9, Paris, France"],"email":"julien.tierny@sorbonne-universite.fr","is_corresponding":false,"name":"Julien Tierny"},{"affiliations":["RPTU Kaiserslautern-Landau, Kaiserslautern, Germany"],"email":"garth@rptu.de","is_corresponding":false,"name":"Christoph Garth"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Michael Will"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"associated","presentation_mode":"","session_id":"a-ldav0","slot_id":"a-ldav-1011","time_end":"","time_stamp":"","time_start":"","title":"Distributed Path Compression for Piecewise Linear Morse-Smale Segmentations and Connected Components","uid":"a-ldav-1011","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"a-ldav-1016":{"abstract":"We propose and discuss a paradigm that allows for expressing data- parallel rendering with the classically non-parallel ANARI API. 
We propose this as a new standard for data-parallel rendering, describe two different implementations of this paradigm, and use multiple sample integrations into existing applications to show how easy it is to adopt, and what can be gained from doing so.","accessible_pdf":false,"authors":[{"affiliations":["NVIDIA, Salt Lake City, United States"],"email":"ingowald@gmail.com","is_corresponding":false,"name":"Ingo Wald"},{"affiliations":["University of Cologne, Cologne, Germany"],"email":"zellmann@uni-koeln.de","is_corresponding":true,"name":"Stefan Zellmann"},{"affiliations":["NVIDIA, Austin, United States"],"email":"jeffamstutz@gmail.com","is_corresponding":false,"name":"Jefferson Amstutz"},{"affiliations":["University of California, Davis, Davis, United States"],"email":"qadwu@ucdavis.edu","is_corresponding":false,"name":"Qi Wu"},{"affiliations":["NVIDIA, Santa Clara, United States"],"email":"kgriffin@nvidia.com","is_corresponding":false,"name":"Kevin Shawn Griffin"},{"affiliations":["VSB - Technical University of Ostrava, Ostrava, Czech Republic"],"email":"milan.jaros@vsb.cz","is_corresponding":false,"name":"Milan Jaro\u0161"},{"affiliations":["University of Cologne, Cologne, Germany"],"email":"wesner@uni-koeln.de","is_corresponding":false,"name":"Stefan Wesner"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Stefan Zellmann"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"associated","presentation_mode":"","session_id":"a-ldav0","slot_id":"a-ldav-1016","time_end":"","time_stamp":"","time_start":"","title":"Standardized Data-Parallel Rendering Using ANARI","uid":"a-ldav-1016","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"a-ldav-1018":{"abstract":"Functional approximation as a high-order continuous representation provides a more accurate value and gradient query compared to the traditional discrete volume representation. Volume visualization directly rendered from functional approximation generates high-quality rendering results without high-order artifacts caused by trilinear interpolations. However, querying an encoded functional approximation is computationally expensive, especially when the input dataset is large, making functional approximation impractical for interactive visualization. In this paper, we propose a novel functional approximation multi-resolution representation, Adaptive-FAM, which is lightweight and fast to query. We also design a GPU-accelerated out-of-core multi-resolution volume visualization framework that directly utilizes the Adaptive-FAM representation to generate high-quality rendering with interactive responsiveness. Our method can not only dramatically decrease the caching time, one of the main contributors to input latency, but also effectively improve the cache hit rate through prefetching.
Our approach significantly outperforms the traditional functional approximation method in terms of input latency while maintaining comparable rendering quality.","accessible_pdf":false,"authors":[{"affiliations":["University of Nebraska-Lincoln, Lincoln, United States"],"email":"jianxin.sun@huskers.unl.edu","is_corresponding":true,"name":"Jianxin Sun"},{"affiliations":["Argonne National Laboratory, Lemont, United States"],"email":"dlenz@anl.gov","is_corresponding":false,"name":"David Lenz"},{"affiliations":["University of Nebraska-Lincoln, Lincoln, United States"],"email":"yu@cse.unl.edu","is_corresponding":false,"name":"Hongfeng Yu"},{"affiliations":["Argonne National Laboratory, Lemont, United States"],"email":"tpeterka@mcs.anl.gov","is_corresponding":false,"name":"Tom Peterka"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Jianxin Sun"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"associated","presentation_mode":"","session_id":"a-ldav0","slot_id":"a-ldav-1018","time_end":"","time_stamp":"","time_start":"","title":"Adaptive Multi-Resolution Encoding for Interactive Large-Scale Volume Visualization through Functional Approximation","uid":"a-ldav-1018","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"s-vds-1000":{"abstract":"Efficient public transport systems are crucial for sustainable urban development as cities face increasing mobility demands. Yet, many public transport networks struggle to meet diverse user needs due to historical development, urban constraints, and financial limitations. Traditionally, planning of transport network structure is often based on limited surveys, expert opinions, or partial usage statistics. This provides an incomplete basis for decision-making. We introduce a data-driven approach to public transport planning and optimization, calculating detailed accessibility measures at the individual housing level. Our visual analytics workflow combines population-group-based simulations with dynamic infrastructure analysis, utilizing a scenario-based model to simulate daily travel patterns of varied demographic groups, including schoolchildren, students, workers, and pensioners. These population groups, each with unique mobility requirements and routines, interact with the transport system under different scenarios, traveling to and from Points of Interest (POI), assessed through travel time calculations. Results are visualized through heatmaps, density maps, and network overlays, as well as detailed statistics. Our system allows us to analyze both the underlying data and simulation results on multiple levels of granularity, delivering both broad insights and granular details. Case studies with the city of Konstanz, Germany, reveal key areas where public transport does not meet specific needs, confirmed through a formative user study. Due to the high cost of changing legacy networks, our analysis facilitates the identification of strategic enhancements, such as optimized schedules or rerouting, and a few targeted stop relocations, highlighting consequential variations in accessibility and pinpointing critical service gaps.
Our research advances urban transport analytics by providing policymakers and citizens with a system that delivers both broad insights and granular detail into public transport services for a data-driven quality assessment at the housing level.","accessible_pdf":false,"authors":[{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"yannick.metz@uni-konstanz.de","is_corresponding":false,"name":"Yannick Metz"},{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"dennis-fabian.ackermann@uni-konstanz.de","is_corresponding":false,"name":"Dennis Ackermann"},{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"keim@uni-konstanz.de","is_corresponding":false,"name":"Daniel Keim"},{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"max.fischer@uni-konstanz.de","is_corresponding":true,"name":"Maximilian T. Fischer"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Maximilian T. Fischer"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"associated","presentation_mode":"","session_id":"s-vds0","slot_id":"s-vds-1000","time_end":"","time_stamp":"","time_start":"","title":"Interactive Public Transport Infrastructure Analysis through Mobility Profiles: Making the Mobility Transition Transparent","uid":"s-vds-1000","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"s-vds-1002":{"abstract":"This position paper explores the interplay between automation and human involvement in data science. It synthesizes perspectives from Automated Data Science (AutoDS) and Interactive Data Visualization (VIS), which traditionally represent opposing ends of the human-machine spectrum. While AutoDS aims to enhance efficiency by reducing human tasks, VIS emphasizes the importance of nuanced understanding, innovation, and context provided by human involvement. This paper examines these dichotomies through an online survey and advocates for a balanced approach that harmonizes the efficiency of automation with the irreplaceable insights of human expertise.
Ultimately, we address the essential question of not just what we can automate, but what we should automate, seeking strategies that prioritize technological advancement alongside the fundamental need for human oversight.","accessible_pdf":false,"authors":[{"affiliations":["Tufts University, Boston, United States"],"email":"jen@cs.tufts.edu","is_corresponding":true,"name":"Jen Rogers"},{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, INRIA, Orsay, France"],"email":"mehdi.chakhchoukh@universite-paris-saclay.fr","is_corresponding":false,"name":"Mehdi Chakhchoukh"},{"affiliations":["Leiden Universiteit, Leiden, Netherlands"],"email":"anastacio@aim.rwth-aachen.de","is_corresponding":false,"name":"Marie Anastacio"},{"affiliations":["Tulane University, New Orleans, United States"],"email":"rfaust1@tulane.edu","is_corresponding":false,"name":"Rebecca Faust"},{"affiliations":["University of Warwick, Coventry, United Kingdom"],"email":"cagatay.turkay@warwick.ac.uk","is_corresponding":false,"name":"Cagatay Turkay"},{"affiliations":["University of Wyoming, Laramie, United States"],"email":"larsko@uwyo.edu","is_corresponding":false,"name":"Lars Kotthoff"},{"affiliations":["University of Stuttgart, Stuttgart, Germany"],"email":"steffen.koch@vis.uni-stuttgart.de","is_corresponding":false,"name":"Steffen Koch"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"andreas.kerren@liu.se","is_corresponding":false,"name":"Andreas Kerren"},{"affiliations":["University of Zurich, Zurich, Switzerland"],"email":"bernard@ifi.uzh.ch","is_corresponding":false,"name":"J\u00fcrgen Bernard"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Jen Rogers"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"associated","presentation_mode":"","session_id":"s-vds0","slot_id":"s-vds-1002","time_end":"","time_stamp":"","time_start":"","title":"Visualization and Automation in Data Science: Exploring the Paradox of Humans-in-the-Loop","uid":"s-vds-1002","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"s-vds-1007":{"abstract":"Categorical data does not have an intrinsic definition of distance or order, and therefore, established visualization techniques for categorical data only allow for a set-based or frequency-based analysis, e.g., through Euler diagrams or Parallel Sets, and do not support a similarity-based analysis. We present a novel dimensionality reduction-based visualization for categorical data, which is based on defining the distance of two data items as the number of varying attributes. Our technique enables users to pre-attentively detect groups of similar data items and observe the properties of the projection, such as attributes strongly influencing the embedding. Our prototype visually encodes data properties in an enhanced scatterplot-like visualization, visualizing attributes in the background to show the distribution of categories. In addition, we propose two graph-based measures to quantify the plot's visual quality, which rank attributes according to their contribution to cluster cohesion. To demonstrate the capabilities of our similarity-based projection method, we compare it to Euler diagrams and Parallel Sets regarding visual scalability and evaluate it quantitatively on seven real-world datasets using a range of common quality measures. 
Further, we validate the benefits of our approach through an expert study with five data scientists analyzing the Titanic and Mushroom datasets with up to 23 attributes and 8124 category combinations. Our results indicate that our Categorical Data Map offers an effective analysis method for large datasets with a high number of category combinations.","accessible_pdf":false,"authors":[{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"frederik.dennig@uni-konstanz.de","is_corresponding":true,"name":"Frederik L. Dennig"},{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"lucas.joos@uni-konstanz.de","is_corresponding":false,"name":"Lucas Joos"},{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"patrick.paetzold@uni-konstanz.de","is_corresponding":false,"name":"Patrick Paetzold"},{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"blumbergdaniela@gmail.com","is_corresponding":false,"name":"Daniela Blumberg"},{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"oliver.deussen@uni-konstanz.de","is_corresponding":false,"name":"Oliver Deussen"},{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"keim@uni-konstanz.de","is_corresponding":false,"name":"Daniel Keim"},{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"max.fischer@uni-konstanz.de","is_corresponding":false,"name":"Maximilian T. Fischer"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Frederik L. Dennig"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"associated","presentation_mode":"","session_id":"s-vds0","slot_id":"s-vds-1007","time_end":"","time_stamp":"","time_start":"","title":"The Categorical Data Map: A Multidimensional Scaling-Based Approach","uid":"s-vds-1007","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"s-vds-1013":{"abstract":"Clustering is an essential technique across various domains, such as data science, machine learning, and eXplainable Artificial Intelligence. Information visualization and visual analytics techniques have been proven to effectively support human involvement in the visual exploration of clustered data to enhance the understanding and refinement of cluster assignments. This paper presents an attempt at a deep and exhaustive evaluation of the perceptual aspects of clustering quality metrics, focusing on the Davies-Bouldin Index, Dunn Index, Calinski-Harabasz Index, and Silhouette Score. Our research is centered around two main objectives: a) assessing the human perception of common cluster validity indices (CVIs) in 2D scatterplots and b) exploring the potential of Large Language Models (LLMs), in particular GPT-4o, to emulate the assessed human perception.
By discussing the obtained results and highlighting limitations and areas for further exploration, this paper aims to provide a foundation for future research activities.","accessible_pdf":false,"authors":[{"affiliations":["Sapienza University of Rome, Rome, Italy"],"email":"blasilli@diag.uniroma1.it","is_corresponding":true,"name":"Graziano Blasilli"},{"affiliations":["Northeastern University, Boston, United States"],"email":"kerrigan.d@northeastern.edu","is_corresponding":false,"name":"Daniel Kerrigan"},{"affiliations":["Northeastern University, Boston, United States"],"email":"e.bertini@northeastern.edu","is_corresponding":false,"name":"Enrico Bertini"},{"affiliations":["Sapienza University of Rome, Rome, Italy"],"email":"santucci@diag.uniroma1.it","is_corresponding":false,"name":"Giuseppe Santucci"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Graziano Blasilli"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"associated","presentation_mode":"","session_id":"s-vds0","slot_id":"s-vds-1013","time_end":"","time_stamp":"","time_start":"","title":"Towards a Visual Perception-Based Analysis of Clustering Quality Metrics","uid":"s-vds-1013","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"s-vds-1021":{"abstract":"Recommender systems have become integral to digital experiences, shaping user interactions and preferences across various platforms. Despite their widespread use, these systems often suffer from algorithmic biases that can lead to unfair and unsatisfactory user experiences. This study introduces an interactive tool designed to help users comprehend and explore the impacts of algorithmic harms in recommender systems. By leveraging visualizations, counterfactual explanations, and interactive modules, the tool allows users to investigate how biases such as miscalibration, stereotypes, and filter bubbles affect their recommendations. Informed by in-depth user interviews, both general users and researchers can benefit from increased transparency and personalized impact assessments, ultimately fostering a better understanding of algorithmic biases and contributing to more equitable recommendation outcomes.
This work provides valuable insights for future research and practical applications in mitigating bias and enhancing fairness in machine learning algorithms.","accessible_pdf":false,"authors":[{"affiliations":["University of Pittsburgh, Pittsburgh, United States"],"email":"yongsu.ahn@pitt.edu","is_corresponding":true,"name":"Yongsu Ahn"},{"affiliations":["School of Computing and Information, University of Pittsburgh, Pittsburgh, United States"],"email":"quinnkwolter@gmail.com","is_corresponding":false,"name":"Quinn K Wolter"},{"affiliations":["Quest Diagnostics, Pittsburgh, United States"],"email":"jonilyndick@gmail.com","is_corresponding":false,"name":"Jonilyn Dick"},{"affiliations":["Quest Diagnostics, Pittsburgh, United States"],"email":"janetad99@gmail.com","is_corresponding":false,"name":"Janet Dick"},{"affiliations":["University of Pittsburgh, Pittsburgh, United States"],"email":"yurulin@pitt.edu","is_corresponding":false,"name":"Yu-Ru Lin"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yongsu Ahn"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"associated","presentation_mode":"","session_id":"s-vds0","slot_id":"s-vds-1021","time_end":"","time_stamp":"","time_start":"","title":"Interactive Counterfactual Exploration of Algorithmic Harms in Recommender Systems","uid":"s-vds-1021","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"s-vds-1029":{"abstract":"This position paper discusses the profound impact of Large Language Models (LLMs) on semantic change, emphasizing the need for comprehensive monitoring and visualization techniques. Building on established concepts from linguistics, we examine the interdependency between mental and language models, discussing how LLMs influence and are influenced by human cognition and societal context. We introduce three primary theories to conceptualize such influences: Recontextualization, Standardization, and Semantic Dementia, illustrating how LLMs drive, standardize, and potentially degrade language semantics. Our subsequent review categorizes methods for visualizing semantic change into frequency-based, embedding-based, and context-based techniques, and is the first to assess their effectiveness in capturing linguistic evolution: embedding-based methods are highlighted as crucial for a detailed semantic analysis, reflecting both broad trends and specific linguistic changes. We underscore the need for novel visual, interactive tools to monitor and explain semantic changes induced by LLMs, ensuring the preservation of linguistic diversity and mitigating linguistic biases.
This work provides essential insights for future research on semantic change visualization and the dynamic nature of language evolution in the times of LLMs.","accessible_pdf":false,"authors":[{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"raphael.buchmueller@uni-konstanz.de","is_corresponding":true,"name":"Raphael Buchm\u00fcller"},{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"friederike.koerte@uni-konstanz.de","is_corresponding":false,"name":"Friederike K\u00f6rte"},{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"keim@uni-konstanz.de","is_corresponding":false,"name":"Daniel Keim"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Raphael Buchm\u00fcller"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"associated","presentation_mode":"","session_id":"s-vds0","slot_id":"s-vds-1029","time_end":"","time_stamp":"","time_start":"","title":"Seeing the Shift: Keep an Eye on Semantic Changes in Times of LLMs","uid":"s-vds-1029","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-cga-10078374":{"abstract":"Existing dynamic weighted graph visualization approaches rely on users\u2019 mental comparison to perceive temporal evolution of dynamic weighted graphs, hindering users from effectively analyzing changes across multiple timeslices. We propose DiffSeer, a novel approach for dynamic weighted graph visualization by explicitly visualizing the differences of graph structures (e.g., edge weight differences) between adjacent timeslices. Specifically, we present a novel nested matrix design that overviews the graph structure differences over a time period as well as shows graph structure details in the timeslices of user interest. By collectively considering the overall temporal evolution and structure details in each timeslice, an optimization-based node reordering strategy is developed to group nodes with similar evolution patterns and highlight interesting graph structure details in each timeslice. We conducted two case studies on real-world graph datasets and in-depth interviews with 12 target users to evaluate DiffSeer. 
The results demonstrate its effectiveness in visualizing dynamic weighted graphs.","accessible_pdf":false,"authors":[{"affiliations":"","email":"wenxiaolin@stu.scu.edu.cn","is_corresponding":false,"name":"Xiaolin Wen"},{"affiliations":"","email":"yongwang@smu.edu.sg","is_corresponding":true,"name":"Yong Wang"},{"affiliations":"","email":"wumeixuan@stu.scu.edu.cn","is_corresponding":false,"name":"Meixuan Wu"},{"affiliations":"","email":"wangfengjie@stu.scu.edu.cn","is_corresponding":false,"name":"Fengjie Wang"},{"affiliations":"","email":"xuanwu.yue@connect.ust.hk","is_corresponding":false,"name":"Xuanwu Yue"},{"affiliations":"","email":"shenqm@sustech.edu.cn","is_corresponding":false,"name":"Qiaomu Shen"},{"affiliations":"","email":"mayx@sustech.edu.cn","is_corresponding":false,"name":"Yuxin Ma"},{"affiliations":"","email":"zhumin@scu.edu.cn","is_corresponding":false,"name":"Min Zhu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yong Wang"],"doi":"10.1109/MCG.2023.3248289","external_paper_link":"","fno":"10078374","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Visibility Graph, Spatial Patterns, Weight Change, In-depth Interviews, Temporal Changes, Temporal Evolution, Negative Changes, Interesting Patterns, Edge Weights, Real-world Datasets, Graph Structure, Visual Approach, Dynamic Visualization, Dynamic Graph, Financial Networks, Graph Datasets, Similar Evolutionary Patterns, User Interviews, Similar Changes, Chinese New Year, Sector Indices, Original Graph, Red Rectangle, Nodes In Order, Stock Market Crash, Stacked Bar Charts, Different Types Of Matrices, Chinese New, Blue Rectangle"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"cga0","slot_id":"v-cga-10078374","time_end":"","time_stamp":"","time_start":"","title":"DiffSeer: Difference-Based Dynamic Weighted Graph Visualization","uid":"v-cga-10078374","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-cga-10091124":{"abstract":"The Internet of Food (IoF) is an emerging field in smart foodsheds, involving the creation of a knowledge graph (KG) about the environment, agriculture, food, diet, and health. However, the heterogeneity and size of the KG present challenges for downstream tasks, such as information retrieval and interactive exploration. To address those challenges, we propose an interactive knowledge and learning environment (IKLE) that integrates three programming and modeling languages to support multiple downstream tasks in the analysis pipeline. To make IKLE easier to use, we have developed algorithms to automate the generation of each language. In addition, we collaborated with domain experts to design and develop a dataflow visualization system, which embeds the automatic language generations into components and allows users to build their analysis pipeline by dragging and connecting components of interest. 
We have demonstrated the effectiveness of IKLE through three real-world case studies in smart foodsheds.","accessible_pdf":false,"authors":[{"affiliations":"","email":"tu.253@osu.edu","is_corresponding":true,"name":"Yamei Tu"},{"affiliations":"","email":"wang.5502@osu.edu","is_corresponding":false,"name":"Xiaoqi Wang"},{"affiliations":"","email":"qiu.580@osu.edu","is_corresponding":false,"name":"Rui Qiu"},{"affiliations":"","email":"hwshen@cse.ohio-state.edu","is_corresponding":false,"name":"Han-Wei Shen"},{"affiliations":"","email":"mmmille6@wisc.edu","is_corresponding":false,"name":"Michelle Miller"},{"affiliations":"","email":"jinmeng.rao@wisc.edu","is_corresponding":false,"name":"Jinmeng Rao"},{"affiliations":"","email":"song.gao@wisc.edu","is_corresponding":false,"name":"Song Gao"},{"affiliations":"","email":"prhuber@ucdavis.edu","is_corresponding":false,"name":"Patrick R. Huber"},{"affiliations":"","email":"adhollander@ucdavis.edu","is_corresponding":false,"name":"Allan D. Hollander"},{"affiliations":"","email":"matthew@ic-foods.org","is_corresponding":false,"name":"Matthew Lange"},{"affiliations":"","email":"cgarcia@tacc.utexas.edu","is_corresponding":false,"name":"Christian R. Garcia"},{"affiliations":"","email":"jstubbs@tacc.utexas.edu","is_corresponding":false,"name":"Joe Stubbs"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yamei Tu"],"doi":"10.1109/MCG.2023.3263960","external_paper_link":"","fno":"10091124","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Learning Environment, Interactive Learning Environments, Programming Language, Visual System, Analysis Pipeline, Patterns In Data, Flow Data, Human-computer Interaction, Food Systems, Information Retrieval, Domain Experts, Language Model, Automatic Generation, Interactive Exploration, Cyberinfrastructure, Pre-trained Language Models, Resource Description Framework, SPARQL Query, DBpedia, Entity Types, Data Visualization, Resilience Analysis, Load Data, Query Results, Supply Chain, Network Flow"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"cga0","slot_id":"v-cga-10091124","time_end":"","time_stamp":"","time_start":"","title":"An Interactive Knowledge and Learning Environment in Smart Foodsheds","uid":"v-cga-10091124","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-cga-10128890":{"abstract":"Some 15 years ago, Visualization Viewpoints published an influential article titled Rainbow Color Map (Still) Considered Harmful (Borland and Taylor, 2007). The paper argued that the \u201crainbow colormap\u2019s characteristics of confusing the viewer, obscuring the data and actively misleading interpretation make it a poor choice for visualization.\u201d Subsequent articles often repeat and extend these arguments, so much so that avoiding rainbow colormaps, along with their derivatives, has become dogma in the visualization community. Despite this loud and persistent recommendation, scientists continue to use rainbow colormaps. Have we failed to communicate our message, or do rainbow colormaps offer advantages that have not been fully appreciated? We argue that rainbow colormaps have properties that are underappreciated by existing design conventions. We explore key critiques of the rainbow in the context of recent research to understand where and how rainbows might be misunderstood. 
Choosing a colormap is a complex task, and rainbow colormaps can be useful for selected applications.","accessible_pdf":false,"authors":[{"affiliations":"","email":"cware@ccom.unh.edu","is_corresponding":false,"name":"Colin Ware"},{"affiliations":"","email":"mstone@acm.org","is_corresponding":true,"name":"Maureen Stone"},{"affiliations":"","email":"danielle.szafir@cs.unc.edu","is_corresponding":false,"name":"Danielle Albers Szafir"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Maureen Stone"],"doi":"10.1109/MCG.2023.3246111","external_paper_link":"","fno":"10128890","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Image Color Analysis, Semantics, Data Visualization, Estimation, Reliability Engineering"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"cga0","slot_id":"v-cga-10128890","time_end":"","time_stamp":"","time_start":"","title":"Rainbow Colormaps Are Not All Bad","uid":"v-cga-10128890","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-cga-10198358":{"abstract":"Set visualization facilitates the exploration and analysis of set-type data. However, how sets should be visualized when the data are uncertain is still an open research challenge. To address the problem of depicting uncertainty in set visualization, we ask 1) which aspects of set type data can be affected by uncertainty and 2) which characteristics of uncertainty influence the visualization design. We answer these research questions by first describing a conceptual framework that brings together 1) the information that is primarily relevant in sets (i.e., set membership, set attributes, and element attributes) and 2) different plausible categories of (un)certainty (i.e., certainty, undefined uncertainty as a binary fact, and defined uncertainty as quantifiable measure). Following the structure of our framework, we systematically discuss basic visualization examples of integrating uncertainty in set visualizations. 
We draw on existing knowledge about general uncertainty visualization and previous evidence of its effectiveness.","accessible_pdf":false,"authors":[{"affiliations":"","email":"christian.tominski@uni-rostock.de","is_corresponding":false,"name":"Christian Tominski"},{"affiliations":"","email":"m.behrisch@uu.nl","is_corresponding":true,"name":"Michael Behrisch"},{"affiliations":"","email":"susanne.bleisch@fhnw.ch","is_corresponding":false,"name":"Susanne Bleisch"},{"affiliations":"","email":"sara.fabrikant@geo.uzh.ch","is_corresponding":false,"name":"Sara Irina Fabrikant"},{"affiliations":"","email":"eva.mayr@donau-uni.ac.at","is_corresponding":false,"name":"Eva Mayr"},{"affiliations":"","email":"miksch@ifs.tuwien.ac.at","is_corresponding":false,"name":"Silvia Miksch"},{"affiliations":"","email":"helen.purchase@monash.edu","is_corresponding":false,"name":"Helen Purchase"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Michael Behrisch"],"doi":"10.1109/MCG.2023.3300441","external_paper_link":"","fno":"10198358","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Uncertainty, Data Visualization, Measurement Uncertainty, Visual Analytics, Terminology, Task Analysis, Surveys, Conceptual Framework, Cardinality, Data Visualization, Visual Representation, Measure Of The Amount, Set Membership, Intersection Set, Visual Design, Different Types Of Uncertainty, Missing Values, Visual Methods, Fuzzy Set, Age Of Students, Color Values, Uncertainty Values, Explicit Representation, Aggregate Value, Exact Information, Uncertain Information, Table Cells, Temporal Uncertainty, Uncertain Data, Representation Of Uncertainty, Implicit Representation, Spatial Uncertainty, Point Symbol, Visual Clutter, Color Hue, Graphical Elements, Uncertain Value"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"cga0","slot_id":"v-cga-10198358","time_end":"","time_stamp":"","time_start":"","title":"Visualizing Uncertainty in Sets","uid":"v-cga-10198358","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-cga-10201383":{"abstract":"Although visualizations are a useful tool for helping people to understand information, they can also have unintended effects on human cognition. This is especially true for uncertain information, which is difficult for people to understand. Prior work has found that different methods of visualizing uncertain information can produce different patterns of decision making from users. However, uncertainty can also be represented via text or numerical information, and few studies have systematically compared these types of representations to visualizations of uncertainty. We present two experiments that compared visual representations of risk (icon arrays) to numerical representations (natural frequencies) in a wildfire evacuation task. Like prior studies, we found that different types of visual cues led to different patterns of decision making. In addition, our comparison of visual and numerical representations of risk found that people were more likely to evacuate when they saw visualizations than when they saw numerical representations. 
These experiments reinforce the idea that design choices are not neutral: seemingly minor differences in how information is represented can have important impacts on human risk perception and decision making.","accessible_pdf":false,"authors":[{"affiliations":"","email":"lematze@sandia.gov","is_corresponding":true,"name":"Laura E. Matzen"},{"affiliations":"","email":"bchowel@sandia.gov","is_corresponding":false,"name":"Breannan C. Howell"},{"affiliations":"","email":"mctrumb@sandia.gov","is_corresponding":false,"name":"Michael C. S. Trumbo"},{"affiliations":"","email":"kmdivis@sandia.gov","is_corresponding":false,"name":"Kristin M. Divis"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Laura E. Matzen"],"doi":"10.1109/MCG.2023.3299875","external_paper_link":"","fno":"10201383","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Visualization, Uncertainty, Decision Making, Costs, Task Analysis, Laboratories, Information Analysis, Decision Making, Visual Representation, Numerical Representation, Decision Patterns, Deterministic, Risk Perception, Specific Information, Fundamental Frequency, Point Values, Representation Of Information, Risk Information, Visual Conditions, Numerous Conditions, Human Decision, Numerical Information, Impact Of Different Types, Uncertain Information, Type Of Visualization, Differences In Risk Perception, Representation Of Uncertainty, Increase In Participation, Participants In Experiment, Individual Difference Measures, Sandia National Laboratories, Risk Propensity, Bonus Payments, Average Response Time, Difference In Probability, Response Time"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"cga0","slot_id":"v-cga-10201383","time_end":"","time_stamp":"","time_start":"","title":"Numerical and Visual Representations of Uncertainty Lead to Different Patterns of Decision Making","uid":"v-cga-10201383","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-cga-10207831":{"abstract":"The membership function is used to categorize quantities along with a confidence degree. This article investigates a generic user interaction based on this function for categorizing various types of quantities without modification, which empowers users to articulate uncertainty categorization and enhance their visual data analysis significantly. We present the technique design and an online prototype, supplemented with insights from three case studies that highlight the technique\u2019s efficacy among different types of quantities. Furthermore, we conduct a formal user study to scrutinize the process and reasoning users employ while utilizing our technique. The findings indicate that our technique can help users create customized categories. 
Both our code and the interactive prototype are made available as open-source resources, intended for application across varied domains as a generic tool.","accessible_pdf":false,"authors":[{"affiliations":"","email":"liuliqun.cs@gmail.com","is_corresponding":true,"name":"Liqun Liu"},{"affiliations":"","email":"romain.vuillemot@ec-lyon.fr","is_corresponding":false,"name":"Romain Vuillemot"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Liqun Liu"],"doi":"10.1109/MCG.2023.3301449","external_paper_link":"","fno":"10207831","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Data Visualization, Uncertainty, Prototypes, Fuzzy Logic, Image Color Analysis, Fuzzy Sets, Open Source Software, General Function, Membership Function, User Study, Classification Process, Fuzzy Logic, Quantitative Values, Visualization Techniques, Amount Of Type, Fuzzy Theory, General Interaction, Temperature Dataset, Interaction Techniques, Carbon Dioxide, Computation Time, Rule Based, Web Page, Real World Scenarios, Fuzzy Set, Domain Experts, Supercritical CO 2, Parallel Coordinates, Fuzzy System, Fuzzy Clustering, Interactive Visualization, Amount Of Items, Large Scale Problems"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"cga0","slot_id":"v-cga-10207831","time_end":"","time_stamp":"","time_start":"","title":"A Generic Interactive Membership Function for Categorization of Quantities","uid":"v-cga-10207831","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-cga-10227838":{"abstract":"We report a study investigating the viability of using interactive visualizations to aid architectural design with building codes. While visualizations have been used to support general architectural design exploration, existing computational solutions treat building codes as separate from, rather than part of, the design process, creating challenges for architects. Through a series of participatory design studies with professional architects, we found that interactive visualizations have promising potential to aid design exploration and sensemaking in early stages of architectural design by providing feedback about potential allowances and consequences of design decisions. However, implementing a visualization system necessitates addressing the complexity and ambiguity inherent in building codes. 
To tackle these challenges, we propose various user-driven knowledge management mechanisms for integrating, negotiating, interpreting, and documenting building code rules.","accessible_pdf":false,"authors":[{"affiliations":"","email":"snowak@sfu.ca","is_corresponding":true,"name":"Stan Nowak"},{"affiliations":"","email":"bon.aseniero@autodesk.com","is_corresponding":false,"name":"Bon Adriel Aseniero"},{"affiliations":"","email":"lyn@sfu.ca","is_corresponding":false,"name":"Lyn Bartram"},{"affiliations":"","email":"tovi@dgp.toronto.edu","is_corresponding":false,"name":"Tovi Grossman"},{"affiliations":"","email":"George.fitzmaurice@autodesk.com","is_corresponding":false,"name":"George Fitzmaurice"},{"affiliations":"","email":"justin.matejka@autodesk.com","is_corresponding":false,"name":"Justin Matejka"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Stan Nowak"],"doi":"10.1109/MCG.2023.3307971","external_paper_link":"","fno":"10227838","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"cga0","slot_id":"v-cga-10227838","time_end":"","time_stamp":"","time_start":"","title":"Identifying Visualization Opportunities to Help Architects Manage the Complexity of Building Codes","uid":"v-cga-10227838","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-cga-10414267":{"abstract":"Traditional approaches to data visualization have often focused on comparing different subsets of data, and this is reflected in the many techniques developed and evaluated over the years for visual comparison. Similarly, common workflows for exploratory visualization are built upon the idea of users interactively applying various filter and grouping mechanisms in search of new insights. This paradigm has proven effective at helping users identify correlations between variables that can inform thinking and decision-making. However, recent studies show that consumers of visualizations often draw causal conclusions even when not supported by the data. Motivated by these observations, this article highlights recent advances from a growing community of researchers exploring methods that aim to directly support visual causal inference. However, many of these approaches have their own limitations, which limit their use in many real-world scenarios. 
This article, therefore, also outlines a set of key open challenges and corresponding priorities for new research to advance the state of the art in visual causal inference.","accessible_pdf":false,"authors":[{"affiliations":"","email":"borland@renci.org","is_corresponding":false,"name":"David Borland"},{"affiliations":"","email":"zeyuwang@cs.unc.edu","is_corresponding":false,"name":"Arran Zeyu Wang"},{"affiliations":"","email":"gotz@unc.edu","is_corresponding":false,"name":"David Gotz"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Arran Zeyu Wang"],"doi":"10.1109/MCG.2023.3338788","external_paper_link":"","fno":"10414267","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Analytical Models, Correlation, Visual Analytics, Decision Making, Data Visualization, Reliability Theory, Cognition, Inference Algorithms, Causal Inference, Causality, Social Media, Exploratory Analysis, Data Visualization, Visual Representation, Visual Analysis, Visualization Tool, Open Challenges, Interactive Visualization, Assembly Line, Different Subsets Of Data, Visual Analytics Tool, Data Driven Decision Making, Data Quality, Statistical Models, Causal Effect, Visual System, Use Of Social Media, Bar Charts, Causal Model, Causal Graph, Chart Types, Directed Acyclic Graph, Visual Design, Portion Of The Dataset, Causal Structure, Prior Section, Causal Explanations, Line Graph"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"cga0","slot_id":"v-cga-10414267","time_end":"","time_stamp":"","time_start":"","title":"Using Counterfactuals to Improve Causal Inferences From Visualizations","uid":"v-cga-10414267","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-cga-10478355":{"abstract":"Recent developments in artificial intelligence (AI) and machine learning (ML) have led to the creation of powerful generative AI methods and tools capable of producing text, code, images, and other media in response to user prompts. Significant interest in the technology has led to speculation about what fields, including visualization, can be augmented or replaced by such approaches. However, there remains a lack of understanding about which visualization activities may be particularly suitable for the application of generative AI. Drawing on examples from the field, we map current and emerging capabilities of generative AI across the different phases of the visualization lifecycle and describe salient opportunities and challenges.","accessible_pdf":false,"authors":[{"affiliations":"","email":"rahul.basole@accenture.com","is_corresponding":false,"name":"Rahul C. 
Basole"},{"affiliations":"","email":"timothy.major@accenture.com","is_corresponding":true,"name":"Timothy Major"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Timothy Major"],"doi":"10.1109/MCG.2024.3362168","external_paper_link":"","fno":"10478355","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Generative AI, Art, Artificial Intelligence, Machine Learning, Visualization, Media, Augmented Reality, Machine Learning, Visual Representation, Professional Knowledge, Creative Process, Domain Experts, Generalization Capability, Development Of Artificial Intelligence, Artificial Intelligence Capabilities, Iterative Process, Natural Language, Commercial Software, Hallucinations, Team Sports, Design Requirements, Intelligence Agencies, Recommender Systems, User Requirements, Iterative Design, Use Of Artificial Intelligence, Visual Design, Phase Assemblage, Data Literacy"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"cga0","slot_id":"v-cga-10478355","time_end":"","time_stamp":"","time_start":"","title":"Generative AI for Visualization: Opportunities and Challenges","uid":"v-cga-10478355","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-cga-9612019":{"abstract":"The number of online news articles available nowadays is rapidly increasing. When exploring articles on online news portals, navigation is mostly limited to the most recent ones. The spatial context and the history of topics are not immediately accessible. To support readers in the exploration or research of articles in large datasets, we developed an interactive 3D globe visualization. We worked with datasets from multiple online news portals containing up to 45,000 articles. Using agglomerative hierarchical clustering, we represent the referenced locations of news articles on a globe with different levels of detail. We employ two interaction schemes for navigating the viewpoint on the visualization, including support for hand-held devices and desktop PCs, and provide search functionality and interactive filtering. 
Based on this framework, we explore additional modules for jointly exploring the spatial and temporal domain of the dataset and incorporating live news into the visualization.","accessible_pdf":false,"authors":[{"affiliations":"","email":"nicholas.ingulfsen@gmail.com","is_corresponding":false,"name":"Nicholas Ingulfsen"},{"affiliations":"","email":"simone.schaub@visinf.tu-darmstadt.de","is_corresponding":false,"name":"Simone Schaub-Meyer"},{"affiliations":"","email":"grossm@inf.ethz.ch","is_corresponding":false,"name":"Markus Gross"},{"affiliations":"","email":"tobias.guenther@fau.de","is_corresponding":true,"name":"Tobias G\u00fcnther"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Tobias G\u00fcnther"],"doi":"10.1109/MCG.2021.3127434","external_paper_link":"","fno":"9612019","has_image":false,"has_pdf":false,"image_caption":"","keywords":["News Articles, Number Of Articles, Headlines, Interactive Visualization, Online News, Agglomerative Clustering, Local News, Interactive Exploration, Desktop PC, Different Levels Of Detail, News Portals, Spatial Information, User Study, 3D Space, Human-computer Interaction, Temporal Information, Third Dimension, Tablet Computer, Pie Chart, News Stories, 3D Visualization, Article Details, Visual Point, Bottom Of The Screen, Geospatial Data, Type Of Visualization, Largest Dataset, Tagging Location, Live Feed"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"cga0","slot_id":"v-cga-9612019","time_end":"","time_stamp":"","time_start":"","title":"News Globe: Visualization of Geolocalized News Articles","uid":"v-cga-9612019","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-cga-9745375":{"abstract":"We consider the general problem known as job shop scheduling, in which multiple jobs consist of sequential operations that need to be executed or served by appropriate machines having limited capacities. For example, train journeys (jobs) consist of moves and stops (operations) to be served by rail tracks and stations (machines). A schedule is an assignment of the job operations to machines and times where and when they will be executed. The developers of computational methods for job scheduling need tools enabling them to explore how their methods work. At a high level of generality, we define the system of pertinent exploration tasks and a combination of visualizations capable of supporting the tasks. We provide general descriptions of the purposes, contents, visual encoding, properties, and interactive facilities of the visualizations and illustrate them with images from an example implementation in air traffic management. We justify the design of the visualizations based on the tasks, principles of creating visualizations for pattern discovery, and scalability requirements. 
The outcomes of our research are sufficiently general to be of use in a variety of applications.","accessible_pdf":false,"authors":[{"affiliations":"","email":"gennady.andrienko@iais.fraunhofer.de","is_corresponding":true,"name":"Gennady Andrienko"},{"affiliations":"","email":"natalia.andrienko@iais.fraunhofer.de","is_corresponding":false,"name":"Natalia Andrienko"},{"affiliations":"","email":"jmcordero@e-crida.enaire.es","is_corresponding":false,"name":"Jose Manuel Cordero Garcia"},{"affiliations":"","email":"dirk.hecker@iais.fraunhofer.de","is_corresponding":false,"name":"Dirk Hecker"},{"affiliations":"","email":"georgev@unipi.gr","is_corresponding":false,"name":"George A. Vouros"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Gennady Andrienko"],"doi":"10.1109/MCG.2022.3163437","external_paper_link":"","fno":"9745375","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Visualization, Schedules, Task Analysis, Optimization, Job Shop Scheduling, Data Analysis, Processor Scheduling, Iterative Methods"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"cga0","slot_id":"v-cga-9745375","time_end":"","time_stamp":"","time_start":"","title":"Supporting Visual Exploration of Iterative Job Scheduling","uid":"v-cga-9745375","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-cga-9866547":{"abstract":"In many applications, developed deep-learning models need to be iteratively debugged and refined to improve the model efficiency over time. Debugging some models, such as temporal multilabel classification (TMLC) where each data point can simultaneously belong to multiple classes, can be especially challenging due to the complexity of the analysis and instances that need to be reviewed. In this article, focusing on video activity recognition as an application of TMLC, we propose DETOXER, an interactive visual debugging system to support finding different error types and scopes through providing multiscope explanations.","accessible_pdf":false,"authors":[{"affiliations":"","email":"m.nourani@northeastern.edu","is_corresponding":true,"name":"Mahsan Nourani"},{"affiliations":"","email":"chiradeep.roy@utdallas.edu","is_corresponding":false,"name":"Chiradeep Roy"},{"affiliations":"","email":"dhoneycutt@ufl.edu","is_corresponding":false,"name":"Donald R. Honeycutt"},{"affiliations":"","email":"eragan@ufl.edu","is_corresponding":false,"name":"Eric D. 
Ragan"},{"affiliations":"","email":"vibhav.gogate@utdallas.edu","is_corresponding":false,"name":"Vibhav Gogate"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Mahsan Nourani"],"doi":"10.1109/MCG.2022.3201465","external_paper_link":"","fno":"9866547","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Debugging, Analytical Models, Heating Systems, Data Models, Computational Modeling, Activity Recognition, Deep Learning, Multi Label Classification, Visualization Tool, Temporal Classification, Visual Debugging, False Positive, False Negative, Active Components, Deep Learning Models, Types Of Errors, Video Frames, Error Detection, Detection Of Types, Action Recognition, Interactive Visualization, Sequence Of Points, Design Goals, Positive Errors, Critical Outcomes, Error Patterns, Global Panel, False Negative Rate, False Positive Rate, Heatmap, Visual Approach, Truth Labels, True Positive, Confidence Score, Anomaly Detection, Interface Elements"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"cga0","slot_id":"v-cga-9866547","time_end":"","time_stamp":"","time_start":"","title":"DETOXER: A Visual Debugging Tool With Multiscope Explanations for Temporal Multilabel Classification","uid":"v-cga-9866547","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1026":{"abstract":"We present a visual analytics approach for multi-level visual exploration of users\u2019 interaction strategies in an interactive digital environment. The use of interactive touchscreen exhibits in informal learning environments, such as museums and science centers, often incorporate frameworks that classify learning processes, such as Bloom\u2019s taxonomy, to achieve better user engagement and knowledge transfer. To analyze user behavior within these digital environments, interaction logs are recorded to capture diverse exploration strategies. However, analysis of such logs is challenging, especially in terms of coupling interactions and cognitive learning processes, and existing work within learning and educational contexts remains limited. To address these gaps, we develop a visual analytics approach for analyzing interaction logs that supports exploration at the individual user level and multi-user comparison. The approach utilizes algorithmic methods to identify similarities in users' interactions and reveal their exploration strategies. We motivate and illustrate our approach through an application scenario, using event sequences derived from interaction log data in an experimental study conducted with science center visitors from diverse backgrounds and demographics. The study involves 14 users completing tasks of increasing complexity, designed to stimulate different levels of cognitive learning processes. We implement our approach in an interactive visual analytics prototype system, named VISID, and together with domain experts, discover a set of task-solving exploration strategies, such as \"cascading\" and \"nested-loop\", which reflect different levels of learning processes from Bloom's taxonomy. 
Finally, we discuss the generalizability and scalability of the presented system and the need for further research with data acquired in the wild.","accessible_pdf":false,"authors":[{"affiliations":["Media and Information Technology, Norrk\u00f6ping, Sweden"],"email":"peilin.yu@liu.se","is_corresponding":true,"name":"Peilin Yu"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"aida.vitoria@liu.se","is_corresponding":false,"name":"Aida Nordman"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"marta.koc-januchta@liu.se","is_corresponding":false,"name":"Marta M. Koc-Januchta"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"konrad.schonborn@liu.se","is_corresponding":false,"name":"Konrad J Sch\u00f6nborn"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"lonni.besancon@gmail.com","is_corresponding":false,"name":"Lonni Besan\u00e7on"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"katerina.vrotsou@liu.se","is_corresponding":false,"name":"Katerina Vrotsou"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Peilin Yu"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1026","time_end":"","time_stamp":"","time_start":"","title":"Revealing Interaction Dynamics: Multi-Level Visual Exploration of User Strategies with an Interactive Digital Environment","uid":"v-full-1026","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1031":{"abstract":"In soccer, player scouting aims to find players suitable for a team to increase the winning chance in future matches. To scout suitable players, coaches and analysts need to consider various complicated factors, such as the players' performance in the tactics of a new team, which is hard to learn directly from their historical performance. Match simulation methods have been introduced to scout players by estimating their expected contributions to a new team. However, they usually focus on the simulation of match results and hardly support interactive analysis to navigate potential target players and compare them in fine-grained simulated behaviors. In this work, we propose a visual analytics method to assist soccer player scouting based on match simulation. We construct a two-level match simulation framework for estimating both match results and player behaviors when a player comes to a new team. Based on the framework, we develop a visual analytics system, Team-Scouter, to facilitate the simulative-based soccer player scouting process through player navigation, comparison, and explanation. With our system, coaches and analysts can find potential players suitable for the team and compare them on historical and expected performances. To explain the players' expected performances, the system provides a visual comparison between the simulated behaviors of the player and the actual ones. 
The usefulness and effectiveness of the system are demonstrated by two case studies on a real-world dataset and an expert interview.","accessible_pdf":false,"authors":[{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"caoanqi28@163.com","is_corresponding":true,"name":"Anqi Cao"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"xxie@zju.edu.cn","is_corresponding":false,"name":"Xiao Xie"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"2366385033@qq.com","is_corresponding":false,"name":"Runjin Zhang"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"1282533692@qq.com","is_corresponding":false,"name":"Yuxin Tian"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"fanmu_032@zju.edu.cn","is_corresponding":false,"name":"Mu Fan"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"zhang_hui@zju.edu.cn","is_corresponding":false,"name":"Hui Zhang"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"ycwu@zju.edu.cn","is_corresponding":false,"name":"Yingcai Wu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Anqi Cao"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1031","time_end":"","time_stamp":"","time_start":"","title":"Team-Scouter: Simulative Visual Analytics of Soccer Player Scouting","uid":"v-full-1031","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1032":{"abstract":"Dynamic topic modeling is useful at discovering the development and change in latent topics over time. However, present methodology relies on algorithms that separate document and word representations. This prevents the creation of a meaningful embedding space where changes in word usage and documents can be directly analyzed in a temporal context. This paper proposes an expansion of the compass-aligned temporal Word2Vec methodology into dynamic topic modeling. Such a method allows for the direct comparison of word and document embeddings across time in dynamic topics. This enables the creation of visualizations that incorporate diachronic word embeddings within the context of documents into topic visualizations. In experiments against the current state-of-the-art, our proposed method demonstrates overall competitive performance in topic relevancy and diversity across temporal datasets of varying size. 
Simultaneously, it provides insightful visualizations focused on temporal word embeddings while maintaining the insights provided by global topic evolution, advancing our understanding of how topics evolve over time.","accessible_pdf":false,"authors":[{"affiliations":["Virginia Tech, Blacksburg, United States"],"email":"d4n1elp@vt.edu","is_corresponding":true,"name":"Daniel Palamarchuk"},{"affiliations":["Virginia Polytechnic Institute of Technology, Blacksburg, United States"],"email":"lemaraw@vt.edu","is_corresponding":false,"name":"Lemara Williams"},{"affiliations":["Virginia Tech, Blacksburg, United States"],"email":"bmayer@cs.vt.edu","is_corresponding":false,"name":"Brian Mayer"},{"affiliations":["Savannah River National Laboratory, Aiken, United States"],"email":"thomas.danielson@srnl.doe.gov","is_corresponding":false,"name":"Thomas Danielson"},{"affiliations":["Tulane University, New Orleans, United States"],"email":"rfaust1@tulane.edu","is_corresponding":false,"name":"Rebecca Faust"},{"affiliations":["Savannah River National Laboratory, Aiken, United States"],"email":"larry.deschaine@srnl.doe.gov","is_corresponding":false,"name":"Larry M Deschaine PhD"},{"affiliations":["Virginia Tech, Blacksburg, United States"],"email":"north@vt.edu","is_corresponding":false,"name":"Chris North"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Daniel Palamarchuk"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1032","time_end":"","time_stamp":"","time_start":"","title":"Visualizing Temporal Topic Embeddings with a Compass","uid":"v-full-1032","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1039":{"abstract":"Propagation analysis refers to studying how information spreads on social media, a pivotal endeavor for understanding social sentiment and public opinions. Numerous studies contribute to visualizing information spread, but few have considered the implicit and complex diffusion patterns among multiple platforms. To bridge the gap, we collaborated with professionals to discover crucial factors that dissect the mechanism of cross-platform information spread. Based on that, we propose an information diffusion model that estimates the likelihood of a topic/post spreading among different social media platforms. Moreover, we propose a novel visual metaphor that encapsulates cross-platform patterns in a manner analogous to the spread of seeds across gardens. Specifically, we visualize social platforms, posts, implicit cross-platform routes, and salient instances as elements of a virtual ecosystem \u2014 gardens, flowers, winds, and seeds, respectively. We further develop a visual analytic system, namely BloomWind, that enables users to quickly identify the cross-platform diffusion patterns and investigate the relevant social media posts. 
Ultimately, we demonstrate the usage of BloomWind through two case studies and validate its effectiveness using expert interviews.","accessible_pdf":false,"authors":[{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"940662579@qq.com","is_corresponding":true,"name":"Jianing Yin"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"hzjia@zju.edu.cn","is_corresponding":false,"name":"Hanze Jia"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"zhoubuwei@zju.edu.cn","is_corresponding":false,"name":"Buwei Zhou"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"tangtan@zju.edu.cn","is_corresponding":false,"name":"Tan Tang"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"yingluu@zju.edu.cn","is_corresponding":false,"name":"Lu Ying"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"sn_ye@zju.edu.cn","is_corresponding":false,"name":"Shuainan Ye"},{"affiliations":["Michigan State University, East Lansing, United States"],"email":"pengtaiq@msu.edu","is_corresponding":false,"name":"Tai-Quan Peng"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"ycwu@zju.edu.cn","is_corresponding":false,"name":"Yingcai Wu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Jianing Yin"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1039","time_end":"","time_stamp":"","time_start":"","title":"Blowing Seeds Across Gardens: Visualizing Implicit Propagation of Cross-Platform Social Media Posts","uid":"v-full-1039","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1059":{"abstract":"When treating Head and Neck cancer patients, oncologists have to navigate a complicated series of treatment decisions for each patient. The relationship between each treatment decision and the potential tradeoff of tumor control and toxicity risk is poorly understood, leaving oncologists to largely rely on institutional knowledge and general guidelines that do not take into account specific patient circumstances. Evaluating these risks relies on a complicated understanding of several different factors such as patient health, spatial tumor spread and treatment side effect risk that can not be captured through simple heuristics. To support clinicians in better understanding tradeoffs when deciding on treatment courses, we developed DITTO, a digital-twin and visual computing system that allows clinicians to analyze nuanced patient risk for each patient and decide on an optimal treatment plan. DITTO relies on a sequential Deep Reinforcement Learning (DRL) system to deliver personalized risk of both long-term and short-term disease outcome and toxicity risk for HNC patients. Based on a participatory collaborative design alongside oncologists, we also implement several explainability methods to support clinical trust and encourage healthy skepticism when using our models. We evaluate the efficacy of our model through quantitative evaluation of model performance and case studies with qualitative feedback. 
Finally, we discuss design lessons for developing clinical visual XAI applications for clinical end users.","accessible_pdf":false,"authors":[{"affiliations":["University of Illinois at Chicago, Chicago, United States"],"email":"awentze2@uic.edu","is_corresponding":true,"name":"Andrew Wentzel"},{"affiliations":["University of Houston, Houston, United States"],"email":"skattia@mdanderson.org","is_corresponding":false,"name":"Serageldin Attia"},{"affiliations":["University of Illinois Chicago, Chicago, United States"],"email":"zhangz@uic.edu","is_corresponding":false,"name":"Xinhua Zhang"},{"affiliations":["University of Iowa, Iowa City, United States"],"email":"guadalupe-canahuate@uiowa.edu","is_corresponding":false,"name":"Guadalupe Canahuate"},{"affiliations":["University of Texas, Houston, United States"],"email":"cdfuller@mdanderson.org","is_corresponding":false,"name":"Clifton David Fuller"},{"affiliations":["University of Illinois at Chicago, Chicago, United States"],"email":"g.elisabeta.marai@gmail.com","is_corresponding":false,"name":"G. Elisabeta Marai"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Andrew Wentzel"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1059","time_end":"","time_stamp":"","time_start":"","title":"DITTO: A Visual Digital Twin for Interventions and Temporal Treatment Outcomes in Head and Neck Cancer","uid":"v-full-1059","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1060":{"abstract":"There is increased interest in understanding the interplay between text and visuals in the field of data visualization. However, this attention has predominantly been on the use of text in standalone visualizations (such as text annotation overlays) or augmenting text stories supported by a series of independent views. In this paper, we shift from the traditional focus on single-chart annotations to characterize the nuanced but crucial communication role of text in the complex environment of interactive dashboards. Through a survey and analysis of 190 dashboards in the wild, plus 13 expert interview sessions with experienced dashboard authors, we highlight the distinctive nature of text as an integral component of the dashboard experience, while delving into the categories, semantic levels, and functional roles of text, and exploring how these text elements are coalesced by dashboard authors to guide and inform dashboard users. Our contributions are threefold. First, we distill qualitative and quantitative findings from our studies to characterize current practices of text use in dashboards, including a categorization of text-based components and design patterns. Second, we leverage current practices and existing literature to propose, discuss, and validate recommended practices for text in dashboards, embodied as a set of 12 heuristics that underscore the semantic and functional role of text in offering navigational cues, contextualizing data insights, supporting reading order, among other concerns. 
Third, we reflect on our findings plus existing literature to identify gaps and propose opportunities for data visualization researchers to push the boundaries on text usage for dashboards, from authoring support and interactivity to text generation and content personalization. Our research underscores the significance of elevating text as a first-class citizen in data visualization, and the need to support the inclusion of textual components and their interactive affordances in dashboard design.","accessible_pdf":false,"authors":[{"affiliations":["Tableau Research, Seattle, United States"],"email":"nicole.sultanum@gmail.com","is_corresponding":true,"name":"Nicole Sultanum"},{"affiliations":["Tableau Research, Palo Alto, United States"],"email":"vsetlur@tableau.com","is_corresponding":false,"name":"Vidya Setlur"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Nicole Sultanum"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1060","time_end":"","time_stamp":"","time_start":"","title":"From Instruction to Insight: Exploring the Semantic and Functional Roles of Text in Interactive Dashboards","uid":"v-full-1060","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1063":{"abstract":"While previous work has found success in deploying visualizations as museum exhibits, it has not investigated whether museum context impacts visitor behaviour with these exhibits. We present an interactive Deep-time Literacy Visualization Exhibit (DeLVE) to help museum visitors understand deep time (lengths of extremely long geological processes) by improving proportional reasoning skills through comparison of different time periods. DeLVE uses a new visualization idiom, Connected Multi-Tier Ranges, to visualize curated datasets of past events across multiple scales of time, relating extreme scales with concrete scales that have more familiar magnitudes and units. Museum staff at three separate museums approved the deployment of DeLVE as a digital kiosk, and devoted time to curating a unique dataset in each of them. We collect data from two sources, an observational study and system trace logs. We discuss the importance of context: similar museum exhibits in different contexts were received very differently by visitors. We additionally discuss differences in our process from Sedlmair et al.'s design study methodology which is focused on design studies triggered by connection with collaborators rather than the discovery of a concept to communicate. Supplemental materials are available at: https://osf.io/z53dq/?view_only=4df33aad207144aca149982412125541","accessible_pdf":false,"authors":[{"affiliations":["The University of British Columbia, Vancouver, Canada"],"email":"marasolen@gmail.com","is_corresponding":true,"name":"Mara Solen"},{"affiliations":["University of British Columbia , Vancouver, Canada"],"email":"sultananigar70@gmail.com","is_corresponding":false,"name":"Nigar Sultana"},{"affiliations":["University of British Columbia, Vancouver, Canada"],"email":"laura.lukes@ubc.ca","is_corresponding":false,"name":"Laura A. 
Lukes"},{"affiliations":["University of British Columbia, Vancouver, Canada"],"email":"tmm@cs.ubc.ca","is_corresponding":false,"name":"Tamara Munzner"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Mara Solen"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1063","time_end":"","time_stamp":"","time_start":"","title":"DeLVE into Earth\u2019s Past: A Visualization-Based Exhibit Deployed Across Multiple Museum Contexts","uid":"v-full-1063","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1067":{"abstract":"Large Language Models (LLMs), such as ChatGPT and Llama, have revolutionized various domains through their impressive natural language processing capabilities. However, their deployment raises significant ethical and security concerns, including their potential misuse for generating fake news or aiding illegal activities. Thus, ensuring the development of secure and trustworthy LLMs is crucial. Traditional red teaming approaches for identifying vulnerabilities in AI models are limited by their reliance on manual prompt construction and expertise. This paper introduces a novel visual analytics system, AdversaFlow, designed to enhance the security of LLMs against adversarial attacks through human-AI collaboration. Our system, which involves adversarial training between a target model and a red model, is equipped with a unique multi-level adversarial flow visualization and a fluctuation path visualization technique. These features provide a detailed insight into the adversarial dynamics and the robustness of LLMs, thereby enabling AI security experts to identify and mitigate vulnerabilities effectively. We deliver quantitative evaluations for the models and present case studies that validate the utility of our system and share insights for future AI security solutions. 
Our contributions include a human-AI collaboration framework for LLM red teaming, a comprehensive visual analytics system to support adversarial pattern presentation and fluctuation analysis, and valuable lessons learned in visual analytics for AI security.","accessible_pdf":false,"authors":[{"affiliations":["Zhejiang University, Ningbo, China"],"email":"dengdazhen@outlook.com","is_corresponding":true,"name":"Dazhen Deng"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"zhangchuhan024@163.com","is_corresponding":false,"name":"Chuhan Zhang"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"huawzheng@gmail.com","is_corresponding":false,"name":"Huawei Zheng"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"yw.pu@zju.edu.cn","is_corresponding":false,"name":"Yuwen Pu"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"sji@zju.edu.cn","is_corresponding":false,"name":"Shouling Ji"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"ycwu@zju.edu.cn","is_corresponding":false,"name":"Yingcai Wu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Dazhen Deng"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1067","time_end":"","time_stamp":"","time_start":"","title":"AdversaFlow: Visual Red Teaming for Large Language Models with Multi-Level Adversarial Flow","uid":"v-full-1067","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1077":{"abstract":"A growing body of work draws on feminist thinking to challenge assumptions about how people engage with and use visualizations. This work draws on feminist values, driving design and research guidelines that account for the influences of power and neglect. This prior work is largely prescriptive, however, forgoing articulation of how feminist theories of knowledge \u2014 or feminist epistemology \u2014 can alter research design and outcomes. At the core of our work is an engagement with feminist epistemology, drawing attention to how a new framework for how we know what we know enabled us to overcome intellectual tensions in our research. Specifically, we focus on the theoretical concept of entanglement, central to recent feminist scholarship, and contribute: a history of entanglement in the broader scope of feminist theory; an articulation of the main points of entanglement theory for a visualization context; and a case study of research outcomes as evidence of the potential of feminist epistemology to impact visualization research. 
This work answers a call in the community to embrace a broader set of theoretical and epistemic foundations and provides a starting point for bringing different theories into visualization research.","accessible_pdf":false,"authors":[{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"derya.akbaba@liu.se","is_corresponding":true,"name":"Derya Akbaba"},{"affiliations":["Emory University, Atlanta, United States"],"email":"lauren.klein@emory.edu","is_corresponding":false,"name":"Lauren Klein"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"miriah.meyer@liu.se","is_corresponding":false,"name":"Miriah Meyer"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Derya Akbaba"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1077","time_end":"","time_stamp":"","time_start":"","title":"Entanglements for Visualization: Changing Research Outcomes through Feminist Theory","uid":"v-full-1077","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1096":{"abstract":"Large Language Models (LLMs) have shown great potential in intelligent visualization systems, especially for domain-specific applications. Integrating LLMs into visualization systems presents challenges, and we categorize these challenges into three alignments: domain problems with LLMs, visualization with LLMs, and interaction with LLMs. To achieve these alignments, we propose a framework and outline a workflow to guide the application of fine-tuned LLMs to enhance visual interactions for domain-specific tasks. These alignment challenges are critical in education as they call for an intelligent visualization system to support beginners' self-regulated learning. Therefore, we apply the framework to education and introduce Tailor-Mind, an interactive visualization system designed to facilitate self-regulated learning for artificial intelligence beginners. Drawing on insights from a preliminary study, we identify self-regulated learning tasks and fine-tuning objectives to guide visualization design and tuning data construction. Our focus on aligning visualization with fine-tuned LLM makes Tailor-Mind more like a personalized tutor. Tailor-Mind also supports interactive recommendations to help beginners better achieve their learning goals. 
Model performance evaluations and user studies confirm that Tailor-Mind improves the self-regulated learning experience, effectively validating the proposed framework.","accessible_pdf":false,"authors":[{"affiliations":["Fudan University, Shanghai, China"],"email":"lgao.lynne@gmail.com","is_corresponding":true,"name":"Lin Gao"},{"affiliations":["Fudan University, Shanghai, China"],"email":"kingluther6666@gmail.com","is_corresponding":false,"name":"Jing Lu"},{"affiliations":["Fudan University, Shanghai, China"],"email":"gemini25szk@gmail.com","is_corresponding":false,"name":"Zekai Shao"},{"affiliations":["Fudan University, Shanghai, China"],"email":"ziyuelin917@gmail.com","is_corresponding":false,"name":"Ziyue Lin"},{"affiliations":["Fudan University, Shanghai, China"],"email":"sbyue23@m.fudan.edu.cn","is_corresponding":false,"name":"Shengbin Yue"},{"affiliations":["Fudan University, Shanghai, China"],"email":"chiokit0819@gmail.com","is_corresponding":false,"name":"Chiokit Ieong"},{"affiliations":["Fudan University, Shanghai, China"],"email":"21307130094@m.fudan.edu.cn","is_corresponding":false,"name":"Yi Sun"},{"affiliations":["University of Vienna, Vienna, Austria"],"email":"rory.james.zauner@univie.ac.at","is_corresponding":false,"name":"Rory Zauner"},{"affiliations":["Fudan University, Shanghai, China"],"email":"zywei@fudan.edu.cn","is_corresponding":false,"name":"Zhongyu Wei"},{"affiliations":["Fudan University, Shanghai, China"],"email":"simingchen3@gmail.com","is_corresponding":false,"name":"Siming Chen"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Lin Gao"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1096","time_end":"","time_stamp":"","time_start":"","title":"Fine-Tuned Large Language Model for Visualization System: A Study on Self-Regulated Learning in Education","uid":"v-full-1096","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1099":{"abstract":"Tactics play an important role in team sports by guiding how players interact on the field. Both sports fans and experts have a demand for analyzing sports tactics. Existing approaches allow users to visually perceive the multivariate tactical effects. However, these approaches usually consider each tactic as a whole, making it difficult for users to connect the complex interactions inside each tactic to the final tactical effect. In this work, we collaborate with basketball experts and propose a progressive approach to help users gain a deeper understanding of how each tactic works and customize tactics on demand. Users can progressively sketch on a tactic board, and a coach agent will simulate the possible actions in each step and present the simulation to users with facet visualizations. We develop an extensible framework that integrates large language models (LLMs) and visualizations to help users communicate with the coach agent with multimodal inputs. Based on the framework, we design and develop Smartboard, an agent-based interactive visualization system for fine-grained tactical analysis. Smartboard provides users with a structured process of setup, simulation, and evolution, allowing for iterative exploration of tactics based on specific personalized scenarios. 
We conduct case studies based on real-world basketball datasets to demonstrate the usefulness of our system.","accessible_pdf":false,"authors":[{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"ziao_liu@outlook.com","is_corresponding":true,"name":"Ziao Liu"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"xxie@zju.edu.cn","is_corresponding":false,"name":"Xiao Xie"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"3170101799@zju.edu.cn","is_corresponding":false,"name":"Moqi He"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"zhao_ws@zju.edu.cn","is_corresponding":false,"name":"Wenshuo Zhao"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"wuyihong0606@gmail.com","is_corresponding":false,"name":"Yihong Wu"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"lycheecheng@zju.edu.cn","is_corresponding":false,"name":"Liqi Cheng"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"zhang_hui@zju.edu.cn","is_corresponding":false,"name":"Hui Zhang"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"ycwu@zju.edu.cn","is_corresponding":false,"name":"Yingcai Wu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Ziao Liu"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1099","time_end":"","time_stamp":"","time_start":"","title":"Smartboard: Visual Exploration of Team Tactics with LLM Agent","uid":"v-full-1099","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1100":{"abstract":"\u201cCorrelation does not imply causation\u201d is a famous mantra in statistical and visual analysis. However, consumers of visualizations often draw causal conclusions when only correlations between variables are shown. In this paper, we investigate factors that contribute to causal relationships users perceive in visualizations. We collected a corpus of concept pairs from variables in widely used datasets and created visualizations that depict varying correlative associations using three typical statistical chart types. We conducted two MTurk studies on (1) preconceived notions on causal relations without charts, and (2) perceived causal relations with charts, for each concept pair. Our results indicate that people make assumptions about causal relationships between pairs of concepts even without seeing any visualized data. Moreover, our results suggest that these assumptions constitute causal priors that, in combination with chart type and visualized association, impact how data visualizations are interpreted. The results also suggest that causal priors may lead to over- or under-estimation in perceived causal relations in different circumstances, and that those priors can also impact users\u2019 confidence in their causal assessments. Using data from the studies, we develop a model to capture the interaction between causal priors and visualized associations as they combine to impact a user\u2019s perceived causal relations. In addition to reporting the study results and analyses, we provide an open dataset of causal priors for 56 specific concept pairs that can serve as a potential benchmark for future studies. 
We also suggest heuristic-based guidelines to help designers improve visualization design choices to better support visual causal inference.","accessible_pdf":false,"authors":[{"affiliations":["University of North Carolina-Chapel Hill, Chapel Hill, United States"],"email":"zeyuwang@cs.unc.edu","is_corresponding":true,"name":"Arran Zeyu Wang"},{"affiliations":["UNC-Chapel Hill, Chapel Hill, United States"],"email":"borland@renci.org","is_corresponding":false,"name":"David Borland"},{"affiliations":["Davidson College, Davidson, United States"],"email":"tapeck@davidson.edu","is_corresponding":false,"name":"Tabitha C. Peck"},{"affiliations":["University of North Carolina, Chapel Hill, United States"],"email":"vaapad@live.unc.edu","is_corresponding":false,"name":"Wenyuan Wang"},{"affiliations":["University of North Carolina, Chapel Hill, United States"],"email":"gotz@unc.edu","is_corresponding":false,"name":"David Gotz"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Arran Zeyu Wang"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1100","time_end":"","time_stamp":"","time_start":"","title":"Causal Priors and Their Influence on Judgements of Causality in Visualized Data","uid":"v-full-1100","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1121":{"abstract":"Acute stroke demands prompt diagnosis and treatment to achieve optimal patient outcomes. However, the intricate and irregular nature of clinical data associated with acute stroke, particularly blood pressure (BP) measurements, presents substantial obstacles to effective visual analytics and decision-making. Through a year-long collaboration with experienced neurologists, we developed PhenoFlow, a visual analytics system that leverages the collaboration between human and Large Language Models (LLMs) to analyze the extensive and complex data of acute ischemic stroke patients. PhenoFlow pioneers an innovative workflow, where the LLM serves as a data wrangler while neurologists explore and supervise the output using visualizations and natural language interactions. This approach enables neurologists to focus more on decision-making with reduced cognitive load. To protect sensitive patient information, PhenoFlow only utilizes metadata to make inferences and synthesize executable code, without accessing raw patient data. This ensures that the results are both reproducible and interpretable while maintaining patient privacy. The system incorporates a slice-and-wrap design that employs temporal folding to create an overlaid circular visualization. Combined with a linear bar graph, this design aids in exploring meaningful patterns within irregularly measured BP data. Through case studies, PhenoFlow has demonstrated its capability to support iterative analysis of extensive clinical datasets, reducing cognitive load and enabling neurologists to make well-informed decisions. 
Grounded in long-term collaboration with domain experts, our research demonstrates the potential of utilizing LLMs to tackle current challenges in data-driven clinical decision-making for acute ischemic stroke patients.","accessible_pdf":false,"authors":[{"affiliations":["Seoul National University, Seoul, Korea, Republic of"],"email":"jykim@hcil.snu.ac.kr","is_corresponding":true,"name":"Jaeyoung Kim"},{"affiliations":["Seoul National University, Seoul, Korea, Republic of"],"email":"sihyeon@hcil.snu.ac.kr","is_corresponding":false,"name":"Sihyeon Lee"},{"affiliations":["Seoul National University, Seoul, Korea, Republic of"],"email":"hj@hcil.snu.ac.kr","is_corresponding":false,"name":"Hyeon Jeon"},{"affiliations":["Korea University Guro Hospital, Seoul, Korea, Republic of"],"email":"gooday19@gmail.com","is_corresponding":false,"name":"Keon-Joo Lee"},{"affiliations":["Hankuk University of Foreign Studies, Yongin-si, Korea, Republic of"],"email":"bkim@hufs.ac.kr","is_corresponding":false,"name":"Bohyoung Kim"},{"affiliations":["Seoul National University Bundang Hospital, Seongnam, Korea, Republic of"],"email":"braindoc@snu.ac.kr","is_corresponding":false,"name":"HEE JOON"},{"affiliations":["Seoul National University, Seoul, Korea, Republic of"],"email":"jseo@snu.ac.kr","is_corresponding":false,"name":"Jinwook Seo"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Jaeyoung Kim"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1121","time_end":"","time_stamp":"","time_start":"","title":"PhenoFlow: A Human-LLM Driven Visual Analytics System for Exploring Large and Complex Stroke Datasets","uid":"v-full-1121","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1128":{"abstract":"Citations allow researchers to quickly identify related research. If multiple publications are selected as seeds, specific suggestions for related literature can be made based on the number of incoming and outgoing citation links to this selection. Interactively adding recommended publications to the selection refines the next suggestion and incrementally builds a relevant collection of publications. Following this approach, the paper presents a search and foraging approach, PUREsuggest, which combines citation-based suggestions with augmented visualizations of the citation network. The focus and novelty of the approach are, first, the transparency of how the rankings are explained visually and, second, that the process can be steered through user-defined keywords, which reflect topics of interest. The system can be used to build new literature collections, to update and assess existing ones, as well as to use the collected literature for identifying relevant experts in the field. 
We evaluated the recommendation approach through simulated sessions and performed a user study investigating search strategies and usage patterns supported by the interface.","accessible_pdf":false,"authors":[{"affiliations":["University of Bamberg, Bamberg, Germany"],"email":"fabian.beck@uni-bamberg.de","is_corresponding":true,"name":"Fabian Beck"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Fabian Beck"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1128","time_end":"","time_stamp":"","time_start":"","title":"PUREsuggest: Citation-based Literature Search and Visual Exploration with Keyword-controlled Rankings","uid":"v-full-1128","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1137":{"abstract":"Inspired by recent advances in digital fabrication, artists and scientists have demonstrated that physical data encodings (i.e., data physicalizations) can increase engagement with data, foster collaboration, and, in some cases, improve data legibility and analysis relative to digital alternatives. However, prior empirical studies have only investigated abstract data encoded in physical form (e.g., laser-cut bar charts) and not continuously sampled spatial data fields relevant to climate and medical science (e.g., heights, temperatures, densities, and velocities sampled on a spatial grid). This paper presents the design and results of the first study to characterize human performance in 3D spatial data analysis tasks across analogous physical and digital visualizations. Participants analyzed continuous spatial elevation data with three visualization modalities: (1) 2D digital visualization; (2) perspective-tracked, stereoscopic \"fishtank\" virtual reality; and (3) 3D printed data physicalization. Their tasks included tracing paths downhill, looking up spatial locations and comparing their relative heights, and identifying and reporting the minimum and maximum heights within certain spatial regions. As hypothesized, in most cases, participants performed the tasks just as well or better in the physical modality (based on time and error metrics). Additional results include an analysis of open-ended feedback from participants and discussion of implications for further research on the value of data physicalization. All data and supplemental materials are available at https://osf.io/7xdq4/?view_only=7416f8cfca85473889456fb69527abbc","accessible_pdf":false,"authors":[{"affiliations":["University of Minnesota, Minneapolis, United States"],"email":"bridger.g.herman@gmail.com","is_corresponding":true,"name":"Bridger Herman"},{"affiliations":["Beth Israel Deaconess Medical Center, Boston, United States"],"email":"cdjackso@bidmc.harvard.edu","is_corresponding":false,"name":"Cullen D. Jackson"},{"affiliations":["University of Minnesota, Minneapolis, United States"],"email":"dfk@umn.edu","is_corresponding":false,"name":"Daniel F. 
Keefe"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Bridger Herman"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1137","time_end":"","time_stamp":"","time_start":"","title":"Touching the Ground: Evaluating the Effectiveness of Data Physicalizations for Spatial Data Analysis Tasks","uid":"v-full-1137","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1140":{"abstract":"Written language is a useful mode for non-visual creative activities like writing essays and planning searches. This paper investigates the integration of written language into the visualization design process. We call this idea a `written rudder,' , since it acts as a guiding force or strategy for the design. Via an interview study of 24 working visualization designers, we first established that only a minority of participants systematically use written rudders to aid in design. A second study with 15 visualization designers examined four different variants of rudders: asking questions, stating conclusions, composing a narrative, and writing titles. Overall, participants had a positive reaction; designers recognized the benefits of explicitly writing down components of the design and indicated that they would use this approach in future design work. More specifically, two approaches \u2013- writing questions and writing conclusions/takeaways \u2013- were seen as beneficial across the design process, while writing narratives showed promise mainly for the creation stage. Although concerns around potential bias during data exploration were raised, participants also discussed strategies to mitigate such concerns. This paper contributes to a deeper understanding of the interplay between language and visualization, and proposes a straightforward, lightweight addition to the visualization design process.","accessible_pdf":false,"authors":[{"affiliations":["UC Berkeley, Berkeley, United States"],"email":"chase_stokes@berkeley.edu","is_corresponding":true,"name":"Chase Stokes"},{"affiliations":["Self, Berkeley, United States"],"email":"clarahu@berkeley.edu","is_corresponding":false,"name":"Clara Hu"},{"affiliations":["UC Berkeley, Berkeley, United States"],"email":"hearst@berkeley.edu","is_corresponding":false,"name":"Marti Hearst"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Chase Stokes"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1140","time_end":"","time_stamp":"","time_start":"","title":"It's a Good Idea to Put It Into Words: Writing 'Rudders' in the Initial Stages of Visualization Design","uid":"v-full-1140","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1142":{"abstract":"To deploy machine learning (ML) models on-device, practitioners use compression algorithms to shrink and speed up models while maintaining their high-quality output. 
A critical aspect of compression in practice is model comparison, including tracking many compression experiments, identifying subtle changes in model behavior, and negotiating complex accuracy-efficiency trade-offs. However, existing compression tools poorly support comparison, leading to tedious and, sometimes, incomplete analyses spread across disjoint tools. To support real-world comparative workflows, we develop an interactive visual system called Compress & Compare. Within a single interface, Compress & Compare surfaces promising compression strategies by visualizing provenance relationships between compressed models and reveals compression-induced behavior changes by comparing models' predictions, weights, and activations. We demonstrate how Compress & Compare supports common compression analysis tasks through two case studies\u2014debugging failed compression on generative language models and identifying compression-induced biases in image classification. We further evaluate Compress & Compare in a user study with eight compression experts, illustrating its potential to provide structure to compression workflows, help practitioners build intuition about compression, and encourage thorough analysis of compression\u2019s effect on model behavior. Through these evaluations, we identify compression-specific challenges that future visual analytics tools should consider and Compress & Compare visualizations that may generalize to broader model comparison tasks.","accessible_pdf":false,"authors":[{"affiliations":["Massachusetts Institute of Technology, Cambridge, United States"],"email":"aboggust@mit.edu","is_corresponding":true,"name":"Angie Boggust"},{"affiliations":["Carnegie Mellon University, Pittsburgh, United States"],"email":"vsivaram@andrew.cmu.edu","is_corresponding":false,"name":"Venkatesh Sivaraman"},{"affiliations":["Apple, Cambridge, United States"],"email":"yassogba@gmail.com","is_corresponding":false,"name":"Yannick Assogba"},{"affiliations":["Apple, Seattle, United States"],"email":"donghao@apple.com","is_corresponding":false,"name":"Donghao Ren"},{"affiliations":["Apple, Pittsburgh, United States"],"email":"domoritz@cmu.edu","is_corresponding":false,"name":"Dominik Moritz"},{"affiliations":["Apple, Seattle, United States"],"email":"fred.hohman@gmail.com","is_corresponding":false,"name":"Fred Hohman"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Angie Boggust"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1142","time_end":"","time_stamp":"","time_start":"","title":"Compress and Compare: Interactively Evaluating Efficiency and Behavior Across ML Model Compression Experiments","uid":"v-full-1142","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1147":{"abstract":"Large Language Models (LLMs) like GPT-4 that support multimodal input (i.e., prompts containing images in addition to text) have immense potential to advance visualization research. However, many questions exist about the visual capabilities of such models, including how well they can read and interpret visually represented data. In our work, we address this question by evaluating the GPT-4 multimodal LLM using a suite of task sets meant to assess the model\u2019s visualization literacy. 
The task sets are based on existing work in the visualization community addressing both automated chart question answering and human visualization literacy across multiple settings. Our assessment finds that GPT-4 can perform tasks such as recognizing trends and extreme values, and also demonstrates some understanding of visualization design best practices. By contrast, GPT-4 struggles with simple value retrieval when not provided with the original dataset, lacks the ability to reliably distinguish between colors in charts, and occasionally suffers from hallucination and inconsistency. We conclude by reflecting on the model\u2019s strengths and weaknesses as well as the potential utility of models like GPT-4 for future visualization research. We also release all code, stimuli, and results for the task sets at the following link: (REDACTED FOR REVIEW)","accessible_pdf":false,"authors":[{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"abendeck3@gatech.edu","is_corresponding":true,"name":"Alexander Bendeck"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"john.stasko@cc.gatech.edu","is_corresponding":false,"name":"John Stasko"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Alexander Bendeck"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1147","time_end":"","time_stamp":"","time_start":"","title":"An Empirical Evaluation of the GPT-4 Multimodal Language Model on Visualization Literacy Tasks","uid":"v-full-1147","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1150":{"abstract":"Composite visualization represents a widely embraced design that combines multiple visual representations to create an integrated view. However, the traditional approach of creating composite visualizations in immersive environments typically occurs asynchronously outside of the immersive space and is carried out by experienced experts. In this work, we take the first step to empower users to participate in the creation of composite visualization within immersive environments through embodied interactions. This could provide a flexible and fluid experience for data exploration and facilitate a deep understanding of the relationship between data visualizations. We begin by forming a design space of embodied interactions to create various types of composite visualizations with the consideration of data relationships. Drawing inspiration from people's natural experience of manipulating physical objects, we design interactions to directly assemble composite visualizations in immersive environments. Building upon the design space, we present a series of case studies showcasing the interactive method to create different kinds of composite visualizations in Virtual Reality (VR). Subsequently, we conduct a user study to evaluate the usability of the derived interaction techniques and user experience of embodiedly creating composite visualizations. 
We find that empowering users to participate in composite visualizations through embodied interactions enables them to flexibly leverage different visualization representations for understanding and communicating the relationships between different views, which underscores the potential for a range of future application scenarios.","accessible_pdf":false,"authors":[{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"qzhual@connect.ust.hk","is_corresponding":true,"name":"Qian Zhu"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"luttul@umich.edu","is_corresponding":false,"name":"Tao Lu"},{"affiliations":["Adobe Research, San Jose, United States"],"email":"sguo@adobe.com","is_corresponding":false,"name":"Shunan Guo"},{"affiliations":["Hong Kong University of Science and Technology, Hong Kong, Hong Kong"],"email":"mxj@cse.ust.hk","is_corresponding":false,"name":"Xiaojuan Ma"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"yalongyang@hotmail.com","is_corresponding":false,"name":"Yalong Yang"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Qian Zhu"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1150","time_end":"","time_stamp":"","time_start":"","title":"CompositingVis: Exploring Interaction for Creating Composite Visualizations in Immersive Environments","uid":"v-full-1150","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1153":{"abstract":"Points of interest on a map, such as restaurants, hotels, or subway stations, give rise to categorical point data: data that have a fixed location and one or more categorical attributes. Consequently, recent years have seen various set visualization approaches that visually connect points of the same category to support users in understanding the spatial distribution of categories. Existing methods use complex and often highly irregular shapes to connect points of the same category, leading to high cognitive load for the user. In this paper we introduce SimpleSets that use simple shapes to enclose categorical point patterns and provide a low-complexity overview of the data distribution. We give formal definitions of point patterns that correspond to simple shapes and describe an algorithm that partitions categorical points into few such patterns. Our second contribution is a rendering algorithm that transforms a given partition into a clean set of shapes resulting in an aesthetically pleasing set visualization. Our algorithm pays particular attention to resolving intersections between nearby shapes in a consistent manner. We compare SimpleSets to the state-of-the-art set visualizations using standard datasets from the literature. 
SimpleSets are designed to visualize disjoint categories; however, we discuss avenues to extend our technique to overlapping set systems.","accessible_pdf":false,"authors":[{"affiliations":["TU Eindhoven, Eindhoven, Netherlands"],"email":"s.w.v.d.broek@tue.nl","is_corresponding":true,"name":"Steven van den Broek"},{"affiliations":["TU Eindhoven, Eindhoven, Netherlands"],"email":"w.meulemans@tue.nl","is_corresponding":false,"name":"Wouter Meulemans"},{"affiliations":["TU Eindhoven, Eindhoven, Netherlands"],"email":"b.speckmann@tue.nl","is_corresponding":false,"name":"Bettina Speckmann"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Steven van den Broek"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1153","time_end":"","time_stamp":"","time_start":"","title":"SimpleSets: Capturing Categorical Point Patterns with Simple Shapes","uid":"v-full-1153","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1155":{"abstract":"Interactive visualizations are powerful tools for Exploratory Data Analysis (EDA), but how do they affect the observations analysts make about their data? We conducted a qualitative experiment with 13 professional data scientists analyzing two datasets within Jupyter notebooks, collecting a rich dataset of interaction traces and think-aloud utterances. By qualitatively analyzing participant verbalizations, we introduce the concept of \"observation-analysis states.\" These states capture both the dataset characteristics a participant focuses on and the insights they express. Our definition reveals that interactive visualizations on average lead to earlier and more complex insights about relationships between dataset attributes compared to static visualizations. Moreover, this process identified new measures for studying representation use in notebooks such as hover time, revisiting rate, and representational diversity. In particular, revisiting rates revealed behavior where analysts revisit particular representations throughout the time course of an analysis, serving more as navigational aids through an EDA than as strict hypothesis answering tools. We show how these measures helped identify other patterns of analysis behavior, such as the \"80-20 rule\", where a small subset of representations drove the majority of observations. 
Based on these findings, we offer design guidelines for interactive exploratory analysis tooling and reflect on future directions for studying the role that visualizations play in EDA.","accessible_pdf":false,"authors":[{"affiliations":["MIT, Cambridge, United States"],"email":"dwootton@mit.edu","is_corresponding":true,"name":"Dylan Wootton"},{"affiliations":["MIT, Cambridge, United States"],"email":"amyraefoxphd@gmail.com","is_corresponding":false,"name":"Amy Rae Fox"},{"affiliations":["University of Colorado Boulder, Boulder, United States"],"email":"evan.peck@colorado.edu","is_corresponding":false,"name":"Evan Peck"},{"affiliations":["MIT, Cambridge, United States"],"email":"arvindsatya@mit.edu","is_corresponding":false,"name":"Arvind Satyanarayan"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Dylan Wootton"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1155","time_end":"","time_stamp":"","time_start":"","title":"Charting EDA: How Visualizations and Interactions Shape Analysis in Computational Notebooks.","uid":"v-full-1155","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1179":{"abstract":"Multi-objective evolutionary algorithms (MOEAs) have emerged as powerful tools for solving complex optimization problems characterized by multiple, often conflicting, objectives. While advancements have been made in computational efficiency as well as diversity and convergence of solutions, a critical challenge persists: the internal evolutionary mechanisms are opaque to human users. Drawing upon the successes of explainable AI in explaining complex algorithms and models, we argue that the need to understand the underlying evolutionary operators and population dynamics in MOEAs aligns well with a visual analytics paradigm. This paper introduces ParetoTracker, a visual analytics framework designed to support the comprehension and inspection of population dynamics in the evolutionary processes of MOEAs. Informed by preliminary literature review and expert interviews, the framework establishes a multi-level analysis scheme, which caters to user engagement and exploration ranging from examining overall trends in performance metrics to conducting fine-grained inspections of evolutionary operations. In contrast to conventional practices that require manual plotting of solutions for each generation, ParetoTracker facilitates the examination of temporal trends and dynamics across consecutive generations in an integrated visual interface. 
The effectiveness of the framework is demonstrated through case studies and expert interviews focused on widely adopted benchmark optimization problems.","accessible_pdf":false,"authors":[{"affiliations":["Southern University of Science and Technology, Shenzhen, China"],"email":"zhangzr32021@mail.sustech.edu.cn","is_corresponding":false,"name":"Zherui Zhang"},{"affiliations":["Southern University of Science and Technology, Shenzhen, China"],"email":"yangf2020@mail.sustech.edu.cn","is_corresponding":false,"name":"Fan Yang"},{"affiliations":["Southern University of Science and Technology, Shenzhen, China"],"email":"ranchengcn@gmail.com","is_corresponding":false,"name":"Ran Cheng"},{"affiliations":["Southern University of Science and Technology, Shenzhen, China"],"email":"mayx@sustech.edu.cn","is_corresponding":true,"name":"Yuxin Ma"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yuxin Ma"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1179","time_end":"","time_stamp":"","time_start":"","title":"ParetoTracker: Understanding Population Dynamics in Multi-objective Evolutionary Algorithms through Visual Analytics","uid":"v-full-1179","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1185":{"abstract":"This paper presents an interactive technique to explain visual patterns in network visualizations to analysts who are unfamiliar with these visualizations and who are learning to read them. Learning a visualization requires mastering its visual grammar and decoding information presented through visual marks, graphical encodings, and spatial configurations. To help people learn unfamiliar network visualization designs and extract meaningful information, we introduce the concept of interactive pattern explanation that allows viewers to select an arbitrary area in a visualization, then mines the underlying data patterns, and eventually explains both visual and data patterns present in the viewer\u2019s selection. In a qualitative and a quantitative user study with a total of 32 participants, we compare interactive pattern explanations to text-only and visual-only (cheat sheet) explanations. 
Our results show that interactive explanations increase learning of i) unfamiliar visualizations, ii) patterns in network science, and iii) the respective network terminology.","accessible_pdf":false,"authors":[{"affiliations":["Newcastle University, Newcastle Upon Tyne, United Kingdom"],"email":"xinhuan.shu@gmail.com","is_corresponding":true,"name":"Xinhuan Shu"},{"affiliations":["University of Edinburgh, Edinburgh, United Kingdom"],"email":"alexis.pister@hotmail.com","is_corresponding":false,"name":"Alexis Pister"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"tangjunxiu@zju.edu.cn","is_corresponding":false,"name":"Junxiu Tang"},{"affiliations":["University of Toronto, Toronto, Canada"],"email":"fanny@dgp.toronto.edu","is_corresponding":false,"name":"Fanny Chevalier"},{"affiliations":["Inria, Bordeaux, France","University of Edinburgh, Edinburgh, United Kingdom"],"email":"bbach@inf.ed.ac.uk","is_corresponding":false,"name":"Benjamin Bach"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Xinhuan Shu"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1185","time_end":"","time_stamp":"","time_start":"","title":"Does This Have a Particular Meaning?: Interactive Pattern Explanation for Network Visualizations","uid":"v-full-1185","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1193":{"abstract":"Emerging multimodal large language models (MLLMs) exhibit great potential for chart question answering (CQA). Recent efforts primarily focus on scaling up training datasets (i.e., charts, data tables, and question-answer (QA) pairs) through data collection and synthesis. However, our empirical study on existing MLLMs and CQA datasets reveals notable gaps. First, current data collection and synthesis focus on data volume and lack consideration of fine-grained visual encodings and QA tasks, resulting in unbalanced data distribution divergent from practical CQA scenarios. Second, existing work follows the training recipe of the base MLLMs initially designed for natural images, under-exploring the adaptation to unique chart characteristics, such as rich text elements. To fill the gap, we propose a visualization-referenced instruction tuning approach to guide the training dataset enhancement and model development. Specifically, we propose a novel data engine to effectively filter diverse and high-quality data from existing datasets and subsequently refine and augment the data using LLM-based generation techniques to better align with practical QA tasks and visual encodings. Then, to facilitate the adaptation to chart characteristics, we utilize the enriched data to train an MLLM by unfreezing the vision encoder and incorporating a mixture-of-resolution adaptation strategy for enhanced fine-grained recognition. Experimental results validate the effectiveness of our approach. Even with fewer training examples, our model consistently outperforms state-of-the-art CQA models on established benchmarks. 
We also contribute a dataset split as a benchmark for future research.","accessible_pdf":false,"authors":[{"affiliations":["The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China"],"email":"xingchen.zeng@outlook.com","is_corresponding":true,"name":"Xingchen Zeng"},{"affiliations":["The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China"],"email":"hlin386@connect.hkust-gz.edu.cn","is_corresponding":false,"name":"Haichuan Lin"},{"affiliations":["The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China"],"email":"yyebd@connect.ust.hk","is_corresponding":false,"name":"Yilin Ye"},{"affiliations":["The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China","The Hong Kong University of Science and Technology, Hong Kong SAR, China"],"email":"weizeng@hkust-gz.edu.cn","is_corresponding":false,"name":"Wei Zeng"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Xingchen Zeng"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1193","time_end":"","time_stamp":"","time_start":"","title":"Advancing Multimodal Large Language Models in Chart Question Answering with Visualization-Referenced Instruction Tuning","uid":"v-full-1193","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1202":{"abstract":"The Dunning-Kruger Effect (DKE) is a metacognitive phenomenon where low-skilled individuals tend to overestimate their competence while high-skilled individuals tend to underestimate their competence. This effect has been observed in a number of domains including humor, grammar, and logic. In this paper, we explore if and how DKE manifests in visual reasoning and visual data analysis tasks. Across two online user studies involving (1) a sliding puzzle game and (2) a scatterplot-based categorization task, we demonstrate that individuals are susceptible to DKE in visual tasks: those who performed best underestimated their performance, while bottom performers overestimated theirs. In addition, we contribute novel analyses that correlate susceptibility to DKE with several variables including personality traits and user interactions. 
Our findings pave the way for novel modes of bias detection via interaction patterns and establish promising directions towards interventions tailored to an individual's personality traits.","accessible_pdf":false,"authors":[{"affiliations":["Emory University, Atlanta, United States"],"email":"mengyu.chen@emory.edu","is_corresponding":true,"name":"Mengyu Chen"},{"affiliations":["Emory University, Atlanta, United States"],"email":"yijun.liu2@emory.edu","is_corresponding":false,"name":"Yijun Liu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Mengyu Chen"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1202","time_end":"","time_stamp":"","time_start":"","title":"Unmasking Dunning-Kruger Effect in Visual Reasoning and Visual Data Analysis","uid":"v-full-1202","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1204":{"abstract":"We present ProvenanceWidgets, a JavaScript library of UI control elements such as radio buttons, checkboxes, and dropdowns to track and dynamically overlay a user's analytic provenance. These in situ overlays not only save screen space but also minimize the amount of time and effort needed to access the same information from elsewhere in the UI. In this paper, we discuss how we design modular UI control elements to track how often and how recently a user interacts with them and design visual overlays showing an aggregated summary as well as a detailed temporal history. We demonstrate the capability of ProvenanceWidgets by recreating three prior widget libraries: (1) Scented Widgets, (2) Phosphor objects, and (3) Dynamic Query Widgets. We also evaluated its expressiveness and conducted case studies with visualization developers to assess its effectiveness. We find that ProvenanceWidgets enables developers to implement custom provenance-tracking applications effectively. 
ProvenanceWidgets is available as open-source software at https://github.com/ProvenanceWidgets to help application developers build custom provenance-based systems.","accessible_pdf":false,"authors":[{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"arpitnarechania@gatech.edu","is_corresponding":true,"name":"Arpit Narechania"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"kaustubhodak1@gmail.com","is_corresponding":false,"name":"Kaustubh Odak"},{"affiliations":["ETH Z\u00fcrich, Z\u00fcrich, Switzerland"],"email":"melassady@ai.ethz.ch","is_corresponding":false,"name":"Mennatallah El-Assady"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"endert@gatech.edu","is_corresponding":false,"name":"Alex Endert"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Arpit Narechania"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1204","time_end":"","time_stamp":"","time_start":"","title":"ProvenanceWidgets: A Library of UI Control Elements to Track and Dynamically Overlay Analytic Provenance","uid":"v-full-1204","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1214":{"abstract":"Graphs are often used to model relationships between entities. The identification and visualization of clusters in graphs enable insight discovery in many application areas, such as life sciences and social sciences. Force-directed graph layout algorithms promote the visual saliency of clusters, as they generally bring adjacent nodes closer together, and push non-adjacent nodes apart. In this work, we study the impact of node ordering on the visual saliency of clusters in orderable node-link diagrams, namely radial diagrams, arc diagrams and symmetric arc diagrams. Through a crowdsourced controlled experiment, we show that users can count clusters consistently more accurately, and to a large extent faster, with orderable node-link diagrams than with three state-of-the-art force-directed layout algorithms, i.e., 'Linlog', 'Backbone', and 'sfdp'. The measured advantage is greater in case of low cluster separability and/or low compactness. 
A free copy of this paper and all supplemental materials are available at https://osf.io/kc3dg/?view_only=892f7b96752e40a6baefb2e50e866f9d","accessible_pdf":false,"authors":[{"affiliations":["Luxembourg Institute of Science and Technology, Esch-sur-Alzette, Luxembourg"],"email":"nora.alnaami@list.lu","is_corresponding":false,"name":"Nora Al-Naami"},{"affiliations":["Luxembourg Institute of Science and Technology, Belvaux, Luxembourg"],"email":"nicolas.medoc@list.lu","is_corresponding":false,"name":"Nicolas Medoc"},{"affiliations":["Uppsala University, Uppsala, Sweden"],"email":"matteo.magnani@it.uu.se","is_corresponding":false,"name":"Matteo Magnani"},{"affiliations":["Luxembourg Institute of Science and Technology, Belvaux, Luxembourg"],"email":"mohammad.ghoniem@list.lu","is_corresponding":true,"name":"Mohammad Ghoniem"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Mohammad Ghoniem"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1214","time_end":"","time_stamp":"","time_start":"","title":"Improved Visual Saliency of Graph Clusters with Orderable Node-Link Layouts","uid":"v-full-1214","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1218":{"abstract":"Placing text labels is a common way to explain key elements in a given scene. Given a graphic input and original label information, how to place labels to meet both geometric and aesthetic requirements is an open and challenging problem. Geometry-wise, traditional rule-driven solutions struggle to capture the complex interactions between labels, let alone consider graphical/appearance content. In terms of aesthetics, training/evaluation data require nontrivial effort and expertise in design, thus resulting in a lack of decent datasets for learning-based methods. To address the above challenges, we formulate the task with a graph representation, where nodes correspond to labels and edges to the between-label interactions, and treat label placement as a node position prediction problem. With this novel representation, we design a Label Placement Graph Transformer (LPGT) to predict label positions. Specifically, edge-level attention, conditioned on node representations, is introduced to reveal potential relationships between labels. To integrate graphic/image information, we design a feature aligning strategy that extracts deep features for nodes and edges efficiently. Next, to address the dataset issue, we collect commercial illustrations with professionally designed label layouts from household appliance manuals, and annotate them with useful information to create a novel dataset, the Appliance Manual Illustration Labels (AMIL) dataset. 
In a thorough evaluation on AMIL, our LPGT solution achieves promising label placement performance compared with popular baselines.","accessible_pdf":false,"authors":[{"affiliations":["Southwest University, Beibei, China"],"email":"qujingwei@swu.edu.cn","is_corresponding":true,"name":"Jingwei Qu"},{"affiliations":["Southwest University, Chongqing, China"],"email":"z2211973606@email.swu.edu.cn","is_corresponding":false,"name":"Pingshun Zhang"},{"affiliations":["Southwest University, Beibei, China"],"email":"enyuche@gmail.com","is_corresponding":false,"name":"Enyu Che"},{"affiliations":["College of Computer and Information Science, Southwest University, Chongqing, China"],"email":"out1147205215@outlook.com","is_corresponding":false,"name":"Yinan Chen"},{"affiliations":["Stony Brook University, New York, United States"],"email":"hling@cs.stonybrook.edu","is_corresponding":false,"name":"Haibin Ling"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Jingwei Qu"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1218","time_end":"","time_stamp":"","time_start":"","title":"Graph Transformer for Label Placement","uid":"v-full-1218","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1232":{"abstract":"How do cancer cells grow, divide, proliferate, and die? How do drugs influence these processes? These are difficult questions that we can attempt to answer with a combination of time-series microscopy experiments, classification algorithms, and data visualization. However, collecting this type of data and applying algorithms to segment and track cells and construct lineages of proliferation is error-prone, and identifying the errors can be challenging since it often requires cross-checking multiple data types. Similarly, analyzing and communicating the results necessitates synthesizing different data types into a single narrative. State-of-the-art visualization methods for such data use independent line charts, tree diagrams, and images in separate views. However, this spatial separation requires the viewer of these charts to combine the relevant pieces of data in memory. To simplify this challenging task, we describe design principles for weaving cell images, time-series data, and tree data into a cohesive visualization. Our design principles are based on choosing a primary data type that drives the layout and integrates the other data types into that layout. We then introduce Aardvark, a system that uses these principles to implement novel visualization techniques. 
Based on Aardvark, we demonstrate the utility of each of these approaches for discovery, communication, and data debugging in a series of case studies.","accessible_pdf":false,"authors":[{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"devin@sci.utah.edu","is_corresponding":true,"name":"Devin Lange"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"robert.judson-torres@hci.utah.edu","is_corresponding":false,"name":"Robert L Judson-Torres"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"tzangle@chemeng.utah.edu","is_corresponding":false,"name":"Thomas A Zangle"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"alex@sci.utah.edu","is_corresponding":false,"name":"Alexander Lex"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Devin Lange"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1232","time_end":"","time_stamp":"","time_start":"","title":"Aardvark: Composite Visualizations of Trees, Time-Series, and Images","uid":"v-full-1232","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1251":{"abstract":"Exploratory data science is an iterative process of obtaining, cleaning, profiling, analyzing, and interpreting data. This cyclical way of working creates challenges within the linear structure of computational notebooks that lead to issues with code quality, recall, and reproducibility. To remedy this, we present Loops, a set of visual support techniques for iterative and exploratory data analysis in computational notebooks. Loops leverages provenance information to visualize the impact of changes made within a notebook. In visualizations of the notebook history, we trace the evolution of the notebook over time and highlight differences between versions. Loops visualizes the provenance of code, markdown, tables, visualizations, and images and their respective differences. Analysts can explore these differences in detail in a separate view. Loops not only improves the reproducibility of notebooks, but also supports analysts in their data science work by showing the effects of changes and facilitating comparison of multiple versions. 
We demonstrate the utility and potential impact of our approach in two use cases and through feedback from notebook users from a range of backgrounds.","accessible_pdf":false,"authors":[{"affiliations":["Johannes Kepler University Linz, Linz, Austria"],"email":"klaus@eckelt.info","is_corresponding":true,"name":"Klaus Eckelt"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"kirangadhave2@gmail.com","is_corresponding":false,"name":"Kiran Gadhave"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"alex@sci.utah.edu","is_corresponding":false,"name":"Alexander Lex"},{"affiliations":["Johannes Kepler University Linz, Linz, Austria"],"email":"marc.streit@jku.at","is_corresponding":false,"name":"Marc Streit"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Klaus Eckelt"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1251","time_end":"","time_stamp":"","time_start":"","title":"Loops: Leveraging Provenance and Visualization to Support Exploratory Data Analysis in Notebooks","uid":"v-full-1251","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1256":{"abstract":"People commonly utilize visualizations not only to examine a given dataset, but also to draw generalizable conclusions about the underlying models or phenomena. Previous research has compared human visual inference to that of an optimal Bayesian agent, with deviations from rational analysis viewed as problematic. However, human reliance on non-normative heuristics may prove advantageous in certain circumstances. We investigate scenarios where human intuition might surpass idealized statistical rationality. In two experiments, we examine individuals' accuracy in characterizing the parameters of known data-generating models from bivariate visualizations. Our findings indicate that, although participants generally exhibited lower accuracy compared to statistical models, they frequently outperformed Bayesian agents, particularly when faced with extreme samples. Participants appeared to rely on their internal models to filter out noisy visualizations, thus improving their resilience against spurious data. However, participants displayed overconfidence and struggled with uncertainty estimation. They also exhibited higher variance than statistical machines. Our findings suggest that analyst gut reactions to visualizations may provide an advantage, even when departing from rationality. These results carry implications for designing visual analytics tools, offering new perspectives on how to integrate statistical models and analyst intuition for improved inference and decision-making.","accessible_pdf":false,"authors":[{"affiliations":["Indiana University, Indianapolis, United States"],"email":"rkoonch@iu.edu","is_corresponding":true,"name":"Ratanond Koonchanok"},{"affiliations":["Argonne National Laboratory, Lemont, United States","University of Illinois Chicago, Chicago, United States"],"email":"papka@anl.gov","is_corresponding":false,"name":"Michael E. 
Papka"},{"affiliations":["Indiana University, Indianapolis, United States"],"email":"redak@iu.edu","is_corresponding":false,"name":"Khairi Reda"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Ratanond Koonchanok"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1256","time_end":"","time_stamp":"","time_start":"","title":"Trust Your Gut: Comparing Human and Machine Inference from Noisy Visualizations","uid":"v-full-1256","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1258":{"abstract":"Providing effective guidance for users has long been an important and challenging task for efficient exploratory visual analytics, especially when selecting variables for visualization in high-dimensional datasets. Correlation is the most widely applied metric for guidance in statistical and analytical tools, however a reliance on correlation may lead users towards false positives when interpreting causal relations in the data. In this work, inspired by prior insights on the benefits of counterfactual visualization in supporting visual causal inference, we propose a novel, simple, and efficient counterfactual guidance method to enhance causal inference performance in guided exploratory analytics based on insights and concerns gathered from expert interviews. Our technique aims to capitalize on the benefits of counterfactual approaches while reducing their complexity for users. We integrated counterfactual guidance into an exploratory visual analytics system, and using a synthetically generated ground-truth causal dataset, conducted a comparative user study and evaluated to what extent counterfactual guidance can help lead users to more precise visual causal inferences. The results suggest that counterfactual guidance improved visual causal inference performance, and also led to different exploratory behaviors compared to correlation-based guidance. 
Based on these findings, we offer future directions to incorporate and examine counterfactual guidance to better support exploratory visual analytics.","accessible_pdf":false,"authors":[{"affiliations":["University of North Carolina-Chapel Hill, Chapel Hill, United States"],"email":"zeyuwang@cs.unc.edu","is_corresponding":true,"name":"Arran Zeyu Wang"},{"affiliations":["UNC-Chapel Hill, Chapel Hill, United States"],"email":"borland@renci.org","is_corresponding":false,"name":"David Borland"},{"affiliations":["University of North Carolina, Chapel Hill, United States"],"email":"gotz@unc.edu","is_corresponding":false,"name":"David Gotz"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Arran Zeyu Wang"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1258","time_end":"","time_stamp":"","time_start":"","title":"Beyond Correlation: Incorporating Counterfactual Guidance to Better Support Exploratory Visual Analysis","uid":"v-full-1258","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1272":{"abstract":"In various scientific and industrial domains, analyzing multivariate spatial data, i.e., vectors associated with spatial locations, is common practice. To analyze those datasets, analysts may turn to models such as Spatial Blind Source Separation (SBSS). Designed explicitly for spatial data analysis, SBSS finds latent components in the dataset and is superior to popular non-spatial models, like PCA. However, when analysts try different tuning parameter settings, the number of latent components complicates analytical tasks. Based on our years-long collaboration with SBSS researchers, we propose a visualization approach to tackle this challenge. The main component is UnDRground Tubes (UT), a general-purpose idiom combining ideas from set visualization and multidimensional projections. We describe the UT visualization pipeline and integrate UT into an interactive multiple-view system. We demonstrate its effectiveness through interviews with SBSS experts, a qualitative evaluation with visualization experts, and computational experiments. SBSS experts were excited about our approach. They saw many benefits for their work and potential applications for geostatistical data analysis more generally. UT was also very well received by visualization experts. 
Our benchmarks show that UT's projections and heuristics are appropriate.","accessible_pdf":false,"authors":[{"affiliations":["TU Wien, Vienna, Austria"],"email":"nikolaus.piccolotto@tuwien.ac.at","is_corresponding":true,"name":"Nikolaus Piccolotto"},{"affiliations":["TU Wien, Vienna, Austria"],"email":"mwallinger@ac.tuwien.ac.at","is_corresponding":false,"name":"Markus Wallinger"},{"affiliations":["Institute of Visual Computing and Human-Centered Technology, Vienna, Austria"],"email":"miksch@ifs.tuwien.ac.at","is_corresponding":false,"name":"Silvia Miksch"},{"affiliations":["TU Wien, Vienna, Austria"],"email":"markus.boegl@tuwien.ac.at","is_corresponding":false,"name":"Markus B\u00f6gl"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Nikolaus Piccolotto"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1272","time_end":"","time_stamp":"","time_start":"","title":"UnDRground Tubes: Exploring Spatial Data With Multidimensional Projections and Set Visualization","uid":"v-full-1272","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1275":{"abstract":"We developed and validated an instrument to measure the perceived readability in data visualization: PREVis. Researchers and practitioners can easily use this instrument as part of their evaluations to compare the perceived readability of different visual data representations. Our instrument can complement results from controlled experiments on user task performance or provide additional data during in-depth qualitative work such as design iterations when developing a new technique. Although readability is recognized as an essential quality of data visualizations, so far there has not been a unified definition of the construct in the context of visual representations. As a result, researchers often lack guidance for determining how to ask people to rate their perceived readability of a visualization. To address this issue, we engaged in a rigorous process to develop the first validated instrument targeted at the subjective readability of visual data representations. Our final instrument consists of 11 items across 4 dimensions: understandability, layout clarity, readability of data values, and readability of data patterns. We provide the questionnaire as a document with implementation guidelines on osf.io/9cg8j. 
Beyond this instrument, we contribute a discussion of how researchers have previously assessed visualization readability, and an analysis of the factors underlying perceived readability in visual data representations.","accessible_pdf":false,"authors":[{"affiliations":["LISN, Universit\u00e9 Paris Saclay, CNRS, Orsay, France","Aviz, Inria, Saclay, France"],"email":"acabouat@gmail.com","is_corresponding":true,"name":"Anne-Flore Cabouat"},{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Orsay, France","Inria, Saclay, France"],"email":"tingying.he@inria.fr","is_corresponding":false,"name":"Tingying He"},{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Orsay, France","Inria, Saclay, France"],"email":"petra.isenberg@inria.fr","is_corresponding":false,"name":"Petra Isenberg"},{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Orsay, France","Inria, Saclay, France"],"email":"tobias.isenberg@gmail.com","is_corresponding":false,"name":"Tobias Isenberg"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Anne-Flore Cabouat"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1275","time_end":"","time_stamp":"","time_start":"","title":"PREVis: Perceived Readability Evaluation for Visualizations","uid":"v-full-1275","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1277":{"abstract":"This paper presents a novel end-to-end framework for closed-form computation and visualization of critical point uncertainty in 2D uncertain scalar fields. Critical points are fundamental topological descriptors used in the visualization and analysis of scalar fields. The uncertainty inherent in data (e.g., observational and experimental data, approximations in simulations, and compression), however, creates uncertainty regarding critical point positions. Uncertainty in critical point positions, therefore, cannot be ignored, given their impact on downstream data analysis tasks. In this work, we study uncertainty in critical points as a function of uncertainty in data modeled with probability distributions. Although Monte Carlo (MC) sampling techniques have been used in prior studies to quantify critical point uncertainty, they are often expensive and are infrequently used in production-quality visualization software. We, therefore, propose a new end-to-end framework to address these challenges that comprises a threefold contribution. First, we derive the critical point uncertainty in closed form, which is more accurate and efficient than the conventional MC sampling methods. Specifically, we provide the closed-form and semianalytical (a mix of closed-form and MC methods) solutions for parametric (e.g., uniform, Epanechnikov) and nonparametric models (e.g., histograms) with finite support. Second, we accelerate critical point probability computations using a parallel implementation with the VTK-m library, which is platform portable. Finally, we integrate our implementation with the ParaView software system, demonstrating near-real-time results for real datasets.","accessible_pdf":false,"authors":[{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"tushar.athawale@gmail.com","is_corresponding":true,"name":"Tushar M. 
Athawale"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"wangz@ornl.gov","is_corresponding":false,"name":"Zhe Wang"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"pugmire@ornl.gov","is_corresponding":false,"name":"David Pugmire"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"kmorel@acm.org","is_corresponding":false,"name":"Kenneth Moreland"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"gongq@ornl.gov","is_corresponding":false,"name":"Qian Gong"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"klasky@ornl.gov","is_corresponding":false,"name":"Scott Klasky"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"crj@sci.utah.edu","is_corresponding":false,"name":"Chris R. Johnson"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"paul.rosen@utah.edu","is_corresponding":false,"name":"Paul Rosen"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Tushar M. Athawale"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1277","time_end":"","time_stamp":"","time_start":"","title":"Uncertainty Visualization of Critical Points of 2D Scalar Fields for Parametric and Nonparametric Probabilistic Models","uid":"v-full-1277","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1281":{"abstract":"Participatory budgeting (PB) is a democratic approach to allocating municipal spending that has been adopted in many places in recent years, including in Chicago. Current PB voting resembles a ballot where residents are asked which municipal projects, such as school improvements and road repairs, to fund with a limited budget. In this work, we ask how interactive visualization can benefit PB by conducting a design probe-based interview study (N=13) with policy workers and academics with expertise in PB, urban planning, and civic HCI. Our probe explores how graphical elicitation of voter preferences and a dashboard of voting statistics can be incorporated into a realistic PB tool. Through qualitative analysis, we find that visualization creates opportunities for city government to set expectations about budget constraints while also granting their constituents greater freedom to articulate a wider range of preferences. However, using visualization to provide transparency about PB requires efforts to mitigate potential access barriers and mistrust. 
We call for more visualization professionals to help build civic capacity by working in and studying political systems.","accessible_pdf":false,"authors":[{"affiliations":["University of Chicago, Chicago, United States"],"email":"kalea@uchicago.edu","is_corresponding":true,"name":"Alex Kale"},{"affiliations":["University of Chicago, Chicago, United States"],"email":"danni6@uchicago.edu","is_corresponding":false,"name":"Danni Liu"},{"affiliations":["University of Chicago, Chicago, United States"],"email":"mariagabrielaa@uchicago.edu","is_corresponding":false,"name":"Maria Gabriela Ayala"},{"affiliations":["University of Chicago, Chicago, United States"],"email":"hwschwab@uchicago.edu","is_corresponding":false,"name":"Harper Schwab"},{"affiliations":["University of Washington, Seattle, United States","University of Utah, Salt Lake City, United States"],"email":"mcnutt.andrew@gmail.com","is_corresponding":false,"name":"Andrew M McNutt"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Alex Kale"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1281","time_end":"","time_stamp":"","time_start":"","title":"What Can Interactive Visualization do for Participatory Budgeting in Chicago?","uid":"v-full-1281","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1288":{"abstract":"Data tables are one of the most common ways in which people encounter data. Although mostly built with text and numbers, data tables have a spatial layout and often exhibit visual elements meant to facilitate their reading. Surprisingly, there is an empirical knowledge gap on how people read and use tables and how different visual aids affect people's ability to use them. In this work, we seek to address this vacuum through a controlled study. We asked participants to repeatedly perform four different tasks with tables in four table representation conditions (plain tables, tables with zebra striping, tables with cell background color encoding cell value, and tables with background bar length in a cell encoding cell value). We analyzed completion time, error rate, gaze-tracking data, mouse movement and participant preferences. We found that visual encodings help for finding maximum values (especially color), but not as much as zebra striping helps in a complex task (comparison of proportional differences). We also characterize typical human behavior for the different tasks. 
These findings can inform the design of tables and research directions for improving presentation of data in tabular form.","accessible_pdf":false,"authors":[{"affiliations":["University of Victoria, Victoria, Canada"],"email":"yongfengji@uvic.ca","is_corresponding":false,"name":"YongFeng Ji"},{"affiliations":["University of Victoria, Victoria, Canada"],"email":"cperin@uvic.ca","is_corresponding":true,"name":"Charles Perin"},{"affiliations":["University of Victoria, Victoria, Canada"],"email":"nacenta@gmail.com","is_corresponding":false,"name":"Miguel A Nacenta"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Charles Perin"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1288","time_end":"","time_stamp":"","time_start":"","title":"The Effect of Visual Aids on Reading Numeric Data Tables","uid":"v-full-1288","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1290":{"abstract":"Visualization linters are end-user-facing evaluators that automatically identify potential chart issues. These spell-checker-like systems offer a blend of interpretability and customization that is not found in other forms of automated assistance. However, existing linters do not model context and have primarily targeted users who do not need assistance, resulting in obvious---even annoying---advice. We investigate these issues within the domain of color palette design, which serves as a microcosm of visualization design concerns. We contribute a GUI-based color palette linter as a design probe that covers perception, accessibility, context, and other design criteria, and use it to explore visual explanations, integrated fixes, and user-defined linting rules. Through a formative interview study and theory-driven analysis, we find that linters can be meaningfully integrated into graphical contexts, thereby addressing many of their core issues. 
We discuss implications for integrating linters into visualization tools, developing improved assertion languages, and supporting end-user tunable advice---all laying the groundwork for more effective visualization linters in any context.","accessible_pdf":false,"authors":[{"affiliations":["University of Washington, Seattle, United States","University of Utah, Salt Lake City, United States"],"email":"mcnutt.andrew@gmail.com","is_corresponding":true,"name":"Andrew M McNutt"},{"affiliations":["University of Washington, Seattle, United States"],"email":"maureen.stone@gmail.com","is_corresponding":false,"name":"Maureen Stone"},{"affiliations":["University of Washington, Seattle, United States"],"email":"jheer@uw.edu","is_corresponding":false,"name":"Jeffrey Heer"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Andrew M McNutt"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1290","time_end":"","time_stamp":"","time_start":"","title":"Mixing Linters with GUIs: A Color Palette Design Probe","uid":"v-full-1290","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1291":{"abstract":"Emotion is an important factor to consider when designing visualizations as it can impact the amount of trust viewers place in a visualization, how well they can retrieve information and understand the underlying data, and how much they engage with or connect to a visualization. We conducted five crowdsourced experiments to quantify the effects of color, chart type, data trend, data variability and data density on emotion (measured through self-reported arousal and valence). Results from our experiments show that there are multiple design elements which influence the emotion induced by a visualization and, more surprisingly, that certain data characteristics influence the emotion of viewers even when the data has no meaning. 
In light of these findings, we offer guidelines on how to use color, scale, and chart type to counterbalance and emphasize the emotional impact of immutable data characteristics.","accessible_pdf":false,"authors":[{"affiliations":["University of Waterloo, Waterloo, Canada","University of Victoria, Victoria, Canada"],"email":"cartergblair@gmail.com","is_corresponding":false,"name":"Carter Blair"},{"affiliations":["University of Victoria, Victoria, Canada","Delft University of Technology, Delft, Netherlands"],"email":"xiyao.wang23@gmail.com","is_corresponding":false,"name":"Xiyao Wang"},{"affiliations":["University of Victoria, Victoria, Canada"],"email":"cperin@uvic.ca","is_corresponding":true,"name":"Charles Perin"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Charles Perin"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1291","time_end":"","time_stamp":"","time_start":"","title":"Quantifying Emotional Responses to Immutable Data Characteristics and Designer Choices in Data Visualizations","uid":"v-full-1291","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1295":{"abstract":"Annotations play a vital role in highlighting critical aspects of visualizations, aiding in data externalization and exploration, collaborative data analysis, and visual storytelling. However, despite their widespread use, we identified the lack of a design space capturing common annotation practices. In this paper, we evaluated over 1,800 static annotated charts to understand how people annotate visualizations in practice. Through qualitative coding of these diverse real-world annotated charts, we explore three primary aspects of annotation usage patterns: analytic purposes for chart annotations (e.g., present, identify, summarize, or compare data features), mechanisms for chart annotations (e.g., types and combinations of annotations used, frequency of different annotation types across chart types, etc.), and the data source used to generate the annotations. We then synthesized our findings into a design space of annotations, highlighting key design choices for chart annotations. 
We present three case studies illustrating our design space as a practical framework for chart annotations to enhance the communication of visualization insights.","accessible_pdf":false,"authors":[{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"dilshadur@sci.utah.edu","is_corresponding":true,"name":"Md Dilshadur Rahman"},{"affiliations":["University of Oklahoma, Norman, United States"],"email":"quadri@ou.edu","is_corresponding":false,"name":"Ghulam Jilani Quadri"},{"affiliations":["University of South Florida, Tampa, United States"],"email":"bdoppalapudi@usf.edu","is_corresponding":false,"name":"Bhavana Doppalapudi"},{"affiliations":["University of North Carolina-Chapel Hill, Chapel Hill, United States"],"email":"danielle.szafir@cs.unc.edu","is_corresponding":false,"name":"Danielle Albers Szafir"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"paul.rosen@utah.edu","is_corresponding":false,"name":"Paul Rosen"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Md Dilshadur Rahman"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1295","time_end":"","time_stamp":"","time_start":"","title":"A Qualitative Analysis of Common Practices in Annotations: A Taxonomy and Design Space","uid":"v-full-1295","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1302":{"abstract":"We present the results of an exploratory study on how pairs interact with speech commands and touch gestures on a wall-sized display during a collaborative sensemaking task. Previous work has shown that speech commands, alone or in combination with other input modalities, can support visual data exploration by individuals. However, it is still unknown whether and how speech commands can be used in collaboration, and for what tasks. To answer these questions, we developed a functioning prototype that we used as a technology probe. We conducted an in-depth exploratory study with 20 participants (10 pairs) to analyze their interaction choices, the interplay between the input modalities, and their collaboration. While touch was the most used modality, we found that participants preferred speech commands for global operations, used them for distant interaction, and that speech interaction contributed to the awareness of the partner\u2019s actions. Furthermore, the likelihood of using speech commands during collaboration was related to the personality trait of agreeableness. Regarding collaboration styles, participants interacted with speech equally often whether they were in loosely or closely coupled collaboration. While the partners stood closer to each other during close collaboration, they did not walk away from their partner to use speech commands. 
From our findings, we derive and contribute a set of design considerations for collaborative and multimodal interactive data analysis systems.","accessible_pdf":false,"authors":[{"affiliations":["University of Bremen, Bremen, Germany"],"email":"molina@uni-bremen.de","is_corresponding":true,"name":"Gabriela Molina Le\u00f3n"},{"affiliations":["LISN, Universit\u00e9 Paris-Saclay, CNRS, INRIA, Orsay, France"],"email":"anastasia.bezerianos@universite-paris-saclay.fr","is_corresponding":false,"name":"Anastasia Bezerianos"},{"affiliations":["Inria, Palaiseau, France"],"email":"olivier.gladin@inria.fr","is_corresponding":false,"name":"Olivier Gladin"},{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Orsay, France","Inria, Saclay, France"],"email":"petra.isenberg@inria.fr","is_corresponding":false,"name":"Petra Isenberg"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Gabriela Molina Le\u00f3n"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1302","time_end":"","time_stamp":"","time_start":"","title":"Talk to the Wall: The Role of Speech Interaction in Collaborative Visual Analytics","uid":"v-full-1302","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1307":{"abstract":"Building information modeling (BIM) describes a central data pool covering the entire life cycle of a construction project. Similarly, building energy modeling (BEM) describes the process of using a 3D representation of a building as a basis for thermal simulations to assess the building\u2019s energy performance. This paper explores the intersection of BIM and BEM, focusing on the challenges and methodologies in converting BIM data into BEM representations for energy performance analysis. Our approach, BEMTrace, integrates 3D data wrangling techniques with visualization methodologies to enhance the accuracy and traceability of the BIM-to-BEM conversion process. Through parsing, error detection, and algorithmic correction of BIM data, our methods generate valid BEM models suitable for energy simulation. Visualization techniques provide transparent insights into the conversion process, aiding error identification, validation, and user comprehension. We introduce context-adaptive selections to facilitate user interaction and understanding throughout the conversion process. 
By evaluating user feedback, we show that BEMTrace can solve domain-specific tasks.","accessible_pdf":false,"authors":[{"affiliations":["VRVis Zentrum f\u00fcr Virtual Reality und Visualisierung Forschungs-GmbH, Vienna, Austria"],"email":"walch@vrvis.at","is_corresponding":false,"name":"Andreas Walch"},{"affiliations":["VRVis Zentrum f\u00fcr Virtual Reality und Visualisierung Forschungs-GmbH, Vienna, Austria"],"email":"szabo@vrvis.at","is_corresponding":false,"name":"Attila Szabo"},{"affiliations":["VRVis Zentrum f\u00fcr Virtual Reality und Visualisierung Forschungs-GmbH, Vienna, Austria"],"email":"hs@vrvis.at","is_corresponding":false,"name":"Harald Steinlechner"},{"affiliations":["Independent Researcher, Vienna, Austria"],"email":"thomas@ortner.fyi","is_corresponding":false,"name":"Thomas Ortner"},{"affiliations":["Institute of Visual Computing and Human-Centered Technology, Vienna, Austria"],"email":"groeller@cg.tuwien.ac.at","is_corresponding":false,"name":"Eduard Gr\u00f6ller"},{"affiliations":["VRVis Zentrum f\u00fcr Virtual Reality und Visualisierung Forschungs-GmbH, Vienna, Austria"],"email":"johanna.schmidt@vrvis.at","is_corresponding":true,"name":"Johanna Schmidt"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Johanna Schmidt"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1307","time_end":"","time_stamp":"","time_start":"","title":"BEMTrace: Visualization-driven approach for deriving Building Energy Models from BIM","uid":"v-full-1307","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1309":{"abstract":"Visualizations play a critical role in validating and improving statistical models. However, the design space of model check visualizations is not well understood, making it difficult for authors to explore and specify effective graphical model checks. To address this gap, we present VMC, a grammar for visualizing statistical model checks. VMC defines a model check visualization using four components: (1) samples of distributions of checkable quantities generated from the model, including predictive distributions for new data and distributions of model parameters; (2) transformations on observed data to facilitate comparison; (3) visual representations of distributions; and (4) layouts to facilitate comparing model samples and observed data. We contribute an implementation of VMC as an R package. We validate VMC by reproducing a set of canonical model check examples, and show how using VMC to generate model checks reduces the edit distance between visualizations relative to existing visualization toolkits. 
The findings of an interview study with three expert modelers who used VMC highlight challenges and opportunities for encouraging exploration of correct, effective model check visualizations.","accessible_pdf":false,"authors":[{"affiliations":["Northwestern University, Evanston, United States"],"email":"ziyangguo1030@gmail.com","is_corresponding":true,"name":"Ziyang Guo"},{"affiliations":["University of Chicago, Chicago, United States"],"email":"kalea@uchicago.edu","is_corresponding":false,"name":"Alex Kale"},{"affiliations":["Northwestern University, Chicago, United States"],"email":"matthew.kay@gmail.com","is_corresponding":false,"name":"Matthew Kay"},{"affiliations":["Northwestern University, Evanston, United States"],"email":"jhullman@northwestern.edu","is_corresponding":false,"name":"Jessica Hullman"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Ziyang Guo"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1309","time_end":"","time_stamp":"","time_start":"","title":"VMC: A Grammar for Visualizing Statistical Model Checks","uid":"v-full-1309","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1316":{"abstract":"We apply an approach from cognitive linguistics by mapping Conceptual Metaphor Theory (CMT) to the visualization domain to address patterns of visual conceptual metaphors that are often used in science infographics. Metaphors play an essential part in visual communication and are frequently employed to explain complex concepts. However, their use is often based on intuition, rather than following a formal process. At present, we lack tools and language for understanding and describing metaphor use in visualization to the extent where taxonomy and grammar could guide the creation of visual components, e.g., infographics. Our classification of the visual conceptual mappings within scientific representations is based on the breakdown of visual components in existing scientific infographics. We demonstrate the development of this mapping through a detailed analysis of data collected from four domains (biomedicine, climate, space, and anthropology) that represent a diverse range of visual conceptual metaphors used in the visual communication of science. This work allows us to identify patterns of visual conceptual metaphor use within the domains, resolve ambiguities about why specific conceptual metaphors are used, and develop a better overall understanding of visual metaphor use in scientific infographics. Our analysis shows that ontological and orientational conceptual metaphors are the most widely applied to translate complex scientific concepts. 
To support our findings we developed a visual exploratory tool based on the collected database that places the individual infographics on a spatio-temporal scale and illustrates the breakdown of visual conceptual metaphors.","accessible_pdf":false,"authors":[{"affiliations":["Masaryk University, Brno, Czech Republic"],"email":"hana.pokojna@gmail.com","is_corresponding":true,"name":"Hana Pokojn\u00e1"},{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Orsay, France","Inria, Saclay, France"],"email":"tobias.isenberg@gmail.com","is_corresponding":false,"name":"Tobias Isenberg"},{"affiliations":["University of Rostock, Rostock, Germany"],"email":"stefan.bruckner@gmail.com","is_corresponding":false,"name":"Stefan Bruckner"},{"affiliations":["Masaryk University, Brno, Czech Republic"],"email":"kozlikova@fi.muni.cz","is_corresponding":false,"name":"Barbora Kozlikova"},{"affiliations":["University of Bergen, Bergen, Norway","Haukeland University Hospital, University of Bergen, Bergen, Norway"],"email":"laura.garrison@uib.no","is_corresponding":false,"name":"Laura Garrison"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Hana Pokojn\u00e1"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1316","time_end":"","time_stamp":"","time_start":"","title":"The Language of Infographics: Toward Understanding Conceptual Metaphor Use in Scientific Storytelling","uid":"v-full-1316","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1318":{"abstract":"In this study, we address the growing issue of misleading charts, a prevalent problem that undermines the integrity of information dissemination. Misleading charts can distort the viewer's perception of data, leading to misinterpretations and decisions based on false information. The development of effective automatic detection methods for misleading charts is an urgent field of research. The advancement of multimodal Large Language Models (LLMs) has introduced a promising direction for addressing this challenge. We explored the capabilities of these models in analyzing complex charts and assessing the impact of different prompting strategies on the models' analyses. We utilized a dataset of misleading charts collected from the internet by prior research and crafted nine distinct prompts, ranging from simple to complex, to test the ability of four different multimodal LLMs in detecting over 21 different chart issues. Through three experiments--from initial exploration to detailed analysis--we progressively gained insights into how to effectively prompt LLMs to identify misleading charts and developed strategies to address the scalability challenges encountered as we expanded our detection range from the initial five issues to 21 issues in the final experiment. Our findings reveal that multimodal LLMs possess a strong capability for chart comprehension and critical thinking in data interpretation. There is significant potential in employing multimodal LLMs to counter misleading information by supporting critical thinking and enhancing visualization literacy. 
This study demonstrates their applicability in addressing the pressing concern of misleading charts.","accessible_pdf":false,"authors":[{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"yhload@cse.ust.hk","is_corresponding":true,"name":"Leo Yu-Ho Lo"},{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"huamin@cse.ust.hk","is_corresponding":false,"name":"Huamin Qu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Leo Yu-Ho Lo"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1318","time_end":"","time_stamp":"","time_start":"","title":"How Good (Or Bad) Are LLMs in Detecting Misleading Visualizations","uid":"v-full-1318","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1325":{"abstract":"Dynamic data visualizations can convey large amounts of information over time, such as using motion to depict changes in data values for multiple entities. Such dynamic displays put a demand on our visual processing capacities, yet our perception of motion is limited. When tracking multiple objects across space and time, humans can typically track up to four objects, and the capacity is even lower if we also need to remember the history of the objects\u2019 features. Several techniques have been shown to improve the processing of dynamic displays. Staging the animation to sequentially show steps in a transition and tracing object movement by displaying trajectory histories can increase processing by reducing the cognitive load. In this paper, we examine the effectiveness of staging and tracing in dynamic displays. We showed participants animated line charts depicting the movements of lines and asked them to identify the line with the highest mean and variance. We manipulated the animation to display the lines with or without staging, tracing and history, and compared the results to a static chart as a control. Results showed that tracing and staging are preferred by participants, and improve their performance in mean and variance tasks, respectively. The preferred display time was 3 times shorter when staging was used. Also, encoding animation speed with mean and variance in congruent tasks is associated with higher accuracy. 
These findings help inform real-world best practices for building dynamic displays that leverage the strength of humans' visual processing.","accessible_pdf":false,"authors":[{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"shu343@gatech.edu","is_corresponding":true,"name":"Songwen Hu"},{"affiliations":["Northwestern University, Evanston, United States"],"email":"ouxunjiang@u.northwestern.edu","is_corresponding":false,"name":"Ouxun Jiang"},{"affiliations":["Dolby Laboratories Inc., San Francisco, United States"],"email":"jcr@dolby.com","is_corresponding":false,"name":"Jeffrey Riedmiller"},{"affiliations":["Georgia Tech, Atlanta, United States","University of Massachusetts Amherst, Amherst, United States"],"email":"cxiong@gatech.edu","is_corresponding":false,"name":"Cindy Xiong Bearfield"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Songwen Hu"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1325","time_end":"","time_stamp":"","time_start":"","title":"Motion-Based Visual Encoding Can Improve Performance on Perceptual Tasks with Dynamic Time Series","uid":"v-full-1325","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1326":{"abstract":"Evaluating the quality of text responses generated by large language models (LLMs) poses unique challenges compared to traditional machine learning. While automatic side-by-side evaluation has emerged as a promising approach, LLM developers face scalability and interpretability challenges in analyzing these evaluation results. In this paper, we present LLM Comparator, a novel visual analytics tool for interactively analyzing results from side-by-side evaluation of LLMs. The tool provides users with interactive workflows to understand when and why a model performs better or worse than a baseline model, and how the responses from two models differ qualitatively. We iteratively designed and developed the tool by closely working with researchers and engineers at a large technology company. Qualitative feedback from users highlights that the tool facilitates in-depth analysis of individual examples while enabling users to visually overview and flexibly slice data. 
This empowers users to identify undesirable patterns, formulate hypotheses about model behavior, and gain insights for model improvement.","accessible_pdf":false,"authors":[{"affiliations":["Google, Atlanta, United States"],"email":"minsuk.kahng@gmail.com","is_corresponding":true,"name":"Minsuk Kahng"},{"affiliations":["Google Research, Seattle, United States"],"email":"iftenney@google.com","is_corresponding":false,"name":"Ian Tenney"},{"affiliations":["Google Research, Cambridge, United States"],"email":"mahimap@google.com","is_corresponding":false,"name":"Mahima Pushkarna"},{"affiliations":["Google Research, Pittsburgh, United States"],"email":"lxieyang.cmu@gmail.com","is_corresponding":false,"name":"Michael Xieyang Liu"},{"affiliations":["Google Research, Cambridge, United States"],"email":"jwexler@google.com","is_corresponding":false,"name":"James Wexler"},{"affiliations":["Google, Cambridge, United States"],"email":"ereif@google.com","is_corresponding":false,"name":"Emily Reif"},{"affiliations":["Google Research, Mountain View, United States"],"email":"kallarackal@google.com","is_corresponding":false,"name":"Krystal Kallarackal"},{"affiliations":["Google Research, Seattle, United States"],"email":"minsuk.cs@gmail.com","is_corresponding":false,"name":"Minsuk Chang"},{"affiliations":["Google, Cambridge, United States"],"email":"michaelterry@google.com","is_corresponding":false,"name":"Michael Terry"},{"affiliations":["Google, Paris, France"],"email":"ldixon@google.com","is_corresponding":false,"name":"Lucas Dixon"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Minsuk Kahng"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1326","time_end":"","time_stamp":"","time_start":"","title":"LLM Comparator: Interactive Analysis of Side-by-Side Evaluation of Large Language Models","uid":"v-full-1326","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1329":{"abstract":"The integration of Large Language Models (LLMs), especially ChatGPT, into education is poised to revolutionize students' learning experiences by introducing innovative conversational learning methodologies. To empower students to fully leverage the capabilities of ChatGPT in educational scenarios, understanding students' interaction patterns with ChatGPT is crucial for instructors. However, this endeavor is challenging due to the absence of datasets focused on student-ChatGPT conversations and the complexities in identifying and analyzing the evolutional interaction patterns within conversations. To address these challenges, we collected conversational data from 48 students interacting with ChatGPT in a master's level data visualization course over one semester. We then developed a coding scheme, grounded in the literature on cognitive levels and thematic analysis, to categorize students' interaction patterns with ChatGPT. Furthermore, we present a visual analytics system, StuGPTViz, that tracks and compares temporal patterns in student prompts and the quality of ChatGPT's responses at multiple scales, revealing significant pedagogical insights for instructors. We validated the system's effectiveness through expert interviews with six data visualization instructors and three case studies. 
The results confirmed StuGPTViz's capacity to enhance educators' insights into the pedagogical value of ChatGPT. We also discussed the potential research opportunities of applying visual analytics in education and developing AI-driven personalized learning solutions.","accessible_pdf":false,"authors":[{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"zchendf@connect.ust.hk","is_corresponding":true,"name":"Zixin Chen"},{"affiliations":["The Hong Kong University of Science and Technology, Sai Kung, China"],"email":"csejiachenw@ust.hk","is_corresponding":false,"name":"Jiachen Wang"},{"affiliations":["Texas A&M University, College Station, United States"],"email":"xiameng9355@gmail.com","is_corresponding":false,"name":"Meng Xia"},{"affiliations":["The Hong Kong University of Science and Technology, Kowloon, Hong Kong"],"email":"kshigyo@connect.ust.hk","is_corresponding":false,"name":"Kento Shigyo"},{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"dliuak@connect.ust.hk","is_corresponding":false,"name":"Dingdong Liu"},{"affiliations":["Hong Kong University of Science and Technology, Hong Kong, Hong Kong"],"email":"rzhangab@connect.ust.hk","is_corresponding":false,"name":"Rong Zhang"},{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"huamin@cse.ust.hk","is_corresponding":false,"name":"Huamin Qu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Zixin Chen"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1329","time_end":"","time_stamp":"","time_start":"","title":"StuGPTViz: A Visual Analytics Approach to Understand Student-ChatGPT Interactions","uid":"v-full-1329","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1332":{"abstract":"Translating natural language to visualization (NL2VIS) has shown great promise for visual data analysis, but it remains a challenging task that requires multiple low-level implementations, such as natural language processing and visualization design. Recent advancements in pre-trained large language models (LLMs) are opening new avenues for generating visualizations from natural language. However, the lack of a comprehensive and reliable benchmark hinders our understanding of LLMs\u2019 capabilities in visualization generation. In this paper, we address this gap by proposing a new NL2VIS benchmark called VisEval. Firstly, we introduce a high-quality and large-scale dataset. This dataset includes 2,524 representative queries covering 146 databases, paired with accurately labeled ground truths. Secondly, we advocate for a comprehensive automated evaluation methodology covering multiple dimensions, including validity, legality, and readability. By systematically scanning for potential issues with a number of heterogeneous checkers, VisEval provides reliable and trustworthy evaluation outcomes. We run VisEval on a series of state-of-the-art LLMs. 
Our evaluation reveals prevalent challenges and delivers essential insights for future advancements.","accessible_pdf":false,"authors":[{"affiliations":["Microsoft Research, Shanghai, China"],"email":"christy05.chen@gmail.com","is_corresponding":true,"name":"Nan Chen"},{"affiliations":["Microsoft Research, Shanghai, China"],"email":"scottyugochang@gmail.com","is_corresponding":false,"name":"Yuge Zhang"},{"affiliations":["Microsoft Research, Shanghai, China"],"email":"jiahangxu@microsoft.com","is_corresponding":false,"name":"Jiahang Xu"},{"affiliations":["ShanghaiTech University, Shanghai, China"],"email":"rk.ren@outlook.com","is_corresponding":false,"name":"Kan Ren"},{"affiliations":["Microsoft Research, Shanghai, China"],"email":"yuqyang@microsoft.com","is_corresponding":false,"name":"Yuqing Yang"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Nan Chen"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1332","time_end":"","time_stamp":"","time_start":"","title":"VisEval: A Benchmark for Data Visualization in the Era of Large Language Models","uid":"v-full-1332","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1333":{"abstract":"Data videos are increasingly becoming a popular data storytelling form represented by visual and audio integration. In recent years, more and more researchers have explored many narrative structures for effective and attractive data storytelling. Meanwhile, the Hero's Journey provides a classic narrative framework specific to the Hero's story that has been adopted by various mediums. There are continuous discussions about applying the Hero's Journey to data stories. However, so far, there has been little systematic and practical guidance on how to create a data video for a specific story type like the Hero's Journey, or on how to manipulate its sound and visual designs simultaneously. To fill this gap, we first identified 48 data videos aligned with the Hero's Journey from 109 high-quality data videos. Then, we examined how existing practices apply the Hero's Journey when creating data videos. We coded the 48 data videos in terms of the narrative stages, sound design, and visual design according to the Hero's Journey structure. Based on our findings, we proposed a design space to provide practical guidance on custom narrative, visual, and sound design for the different narrative segments of the Hero's Journey (i.e., Departure, Initiation, Return) in data video creation. To validate our proposed design space, we conducted a user study where 20 participants were invited to design data videos with and without the guidance of our design space; the resulting videos were evaluated by two experts. 
Results show that our design space provides useful and practical guidance for data storytellers to effectively create data videos with the Hero's Journey.","accessible_pdf":false,"authors":[{"affiliations":["The Hong Kong University of Science and Technology, Guangzhou, China"],"email":"zwei302@connect.hkust-gz.edu.cn","is_corresponding":true,"name":"Zheng Wei"},{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"huamin@cse.ust.hk","is_corresponding":false,"name":"Huamin Qu"},{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"xxubq@connect.ust.hk","is_corresponding":false,"name":"Xian Xu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Zheng Wei"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1333","time_end":"","time_stamp":"","time_start":"","title":"Telling Data Stories with the Hero\u2019s Journey: Design Guidance for Creating Data Videos","uid":"v-full-1333","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1342":{"abstract":"Genomics experts rely on visualization to extract and share insights from complex and large-scale datasets. Beyond off-the-shelf tools for data exploration, there is an increasing need for platforms that aid experts in authoring customized visualizations for both exploration and communication of insights. A variety of interactive techniques have been proposed for authoring data visualizations, such as template editing, shelf configuration, natural language input, and code editors. However, it remains unclear how genomics experts create visualizations and which techniques best support their visualization tasks and needs. To address this gap, we conducted two user studies with genomics researchers: (1) semi-structured interviews (n=20) to identify the tasks, user contexts, and current visualization authoring techniques and (2) an exploratory study (n=13) using visual probes to elicit users\u2019 intents and desired techniques when creating visualizations. Our contributions include (1) a characterization of how visualization authoring is currently utilized in genomics visualization, identifying limitations and benefits in light of common criteria for authoring tools, and (2) generalizable and actionable design implications for genomics visualization authoring tools based on our findings on task- and user-specific usefulness of authoring techniques.","accessible_pdf":false,"authors":[{"affiliations":["Eindhoven University of Technology, Eindhoven, Netherlands"],"email":"a.v.d.brandt@tue.nl","is_corresponding":true,"name":"Astrid van den Brandt"},{"affiliations":["Harvard Medical School, Boston, United States"],"email":"sehi_lyi@hms.harvard.edu","is_corresponding":false,"name":"Sehi L'Yi"},{"affiliations":["Harvard Medical School, Boston, United States"],"email":"huyen_nguyen@hms.harvard.edu","is_corresponding":false,"name":"Huyen N. 
Nguyen"},{"affiliations":["Eindhoven University of Technology, Eindhoven, Netherlands"],"email":"a.vilanova@tue.nl","is_corresponding":false,"name":"Anna Vilanova"},{"affiliations":["Harvard Medical School, Boston, United States"],"email":"nils@hms.harvard.edu","is_corresponding":false,"name":"Nils Gehlenborg"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Astrid van den Brandt"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1342","time_end":"","time_stamp":"","time_start":"","title":"Understanding Visualization Authoring Techniques for Genomics Data in the Context of Personas and Tasks","uid":"v-full-1342","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1351":{"abstract":"As basketball\u2019s popularity surges, fans often find themselves confused and overwhelmed by the rapid game pace and complexity. Basketball tactics, involving a complex series of actions, require substantial knowledge to be fully understood. This complexity leads to a need for additional information and explanation, which can distract fans from the game. To tackle these challenges, we present Sportify, a Visual Question Answering system that integrates narratives and embedded visualization for demystifying basketball tactical questions, aiding fans in understanding various game aspects. We propose three novel action visualizations (i.e., Pass, Cut, and Screen) to demonstrate critical action sequences. To explain the reasoning and logic behind players\u2019 actions, we leverage a large-language model (LLM) to generate narratives. We adopt a storytelling approach for complex scenarios from both first and third-person perspectives, integrating action visualizations. We evaluated Sportify with basketball fans to investigate its impact on understanding of tactics, and how different personal perspectives of narratives impact the understanding of complex tactic with action visualizations. Our evaluation with basketball fans demonstrates Sportify\u2019s capability to deepen tactical insights and amplify the viewing experience. 
Furthermore, third-person narration assists people in getting in-depth game explanations while first-person narration enhances fans\u2019 game engagement.","accessible_pdf":false,"authors":[{"affiliations":["Harvard University, Allston, United States"],"email":"chungyi347@gmail.com","is_corresponding":true,"name":"Chunggi Lee"},{"affiliations":["Harvard University, Cambridge, United States"],"email":"mlin@g.harvard.edu","is_corresponding":false,"name":"Tica Lin"},{"affiliations":["University of Minnesota-Twin Cities, Minneapolis, United States"],"email":"ztchen@umn.edu","is_corresponding":false,"name":"Chen Zhu-Tian"},{"affiliations":["Harvard University, Cambridge, United States"],"email":"pfister@seas.harvard.edu","is_corresponding":false,"name":"Hanspeter Pfister"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Chunggi Lee"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1351","time_end":"","time_stamp":"","time_start":"","title":"Sportify: Question Answering with Embedded Visualizations and Personified Narratives for Sports Video","uid":"v-full-1351","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1363":{"abstract":"Data visualization aids in making data analysis more intuitive and in-depth, with widespread applications in fields such as biology, finance, and medicine. Massive and continuously growing streaming time series data are typically visualized in the form of line charts, but the data transmission puts significant pressure on the network, leading to visualization lag or even complete failure to render. This paper proposes a universal sampling algorithm, FPCS, which retains feature points from continuously received streaming time series data, compensates for frequently fluctuating feature points, and aims to achieve efficient visualization. This algorithm bridges the gap in sampling for streaming time series data. 
The algorithm has several advantages: (1) It optimizes the sampling results by compensating for fewer feature points, retaining the visualization features of the original data very well, ensuring high-quality sampled data; (2) The execution time is the shortest compared to similar existing algorithms; (3) It has an almost negligible space overhead; (4) The data sampling process does not depend on the overall data; (5) This algorithm can be applied to infinite streaming data and finite static data.","accessible_pdf":false,"authors":[{"affiliations":["China Nanhu Academy of Electronics and Information Technology (CNAEIT), Jiaxing, China"],"email":"3271961659@qq.com","is_corresponding":true,"name":"Hongyan Li"},{"affiliations":["China Nanhu Academy of Electronics and Information Technology (CNAEIT), Jiaxing, China"],"email":"ustcboy@outlook.com","is_corresponding":false,"name":"Bo Yang"},{"affiliations":["China Nanhu Academy of Electronics and Information Technology, Jiaxing, China"],"email":"caiyansong@cnaeit.com","is_corresponding":false,"name":"Yansong Chua"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Hongyan Li"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1363","time_end":"","time_stamp":"","time_start":"","title":"FPCS: Feature Preserving Compensated Sampling of Streaming Time Series Data","uid":"v-full-1363","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1368":{"abstract":"Synthetic Lethal (SL) relationships, although rare among the vast array of gene combinations, hold substantial promise for targeted cancer therapy. Despite advancements in AI model accuracy, there remains a persistent need among domain experts for interpretive paths and mechanism explorations that better harmonize with domain-specific knowledge, particularly due to the significant costs involved in experimentation. To address this gap, we propose an iterative Human-AI collaborative framework comprising two key components: 1) Human-Engaged Knowledge Graph Refinement based on Metapath Strategies, which leverages insights from interpretive paths and domain expertise to refine the knowledge graph through metapath strategies with appropriate granularity. 2) Cross-Granularity SL Interpretation Enhancement and Mechanism Analysis, which aids domain experts in organizing and comparing prediction results and interpretive paths across different granularities, thereby uncovering new SL relationships, enhancing result interpretation, and elucidating potential mechanisms inferred by Graph Neural Network (GNN) models. These components cyclically optimize model predictions and mechanism explorations, thereby enhancing expert involvement and intervention to build trust. This framework, facilitated by SLInterpreter, ensures that newly generated interpretive paths increasingly align with domain knowledge and adhere more closely to real-world biological principles through iterative Human-AI collaboration. 
Subsequently, we evaluate the efficacy of the framework through a case study and expert interviews.","accessible_pdf":false,"authors":[{"affiliations":["Shanghaitech University, Shanghai, China"],"email":"jianghr2023@shanghaitech.edu.cn","is_corresponding":true,"name":"Haoran Jiang"},{"affiliations":["ShanghaiTech University, Shanghai, China"],"email":"shishh2023@shanghaitech.edu.cn","is_corresponding":false,"name":"Shaohan Shi"},{"affiliations":["ShanghaiTech University, Shanghai, China"],"email":"zhangshh2@shanghaitech.edu.cn","is_corresponding":false,"name":"Shuhao Zhang"},{"affiliations":["ShanghaiTech University, Shanghai, China"],"email":"zhengjie@shanghaitech.edu.cn","is_corresponding":false,"name":"Jie Zheng"},{"affiliations":["ShanghaiTech University, Shanghai, China"],"email":"liquan@shanghaitech.edu.cn","is_corresponding":false,"name":"Quan Li"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Haoran Jiang"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1368","time_end":"","time_stamp":"","time_start":"","title":"SLInterpreter: An Exploratory and Iterative Human-AI Collaborative System for GNN-based Synthetic Lethal Prediction","uid":"v-full-1368","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1391":{"abstract":"In volume visualization, visualization synthesis has attracted much attention due to its ability to generate novel visualizations without following the conventional rendering pipeline. However, existing solutions based on generative adversarial networks often require many training images and take significant training time. Still, issues such as low quality, consistency, and flexibility persist. This paper introduces StyleRF-VolVis, an innovative style transfer framework for expressive volume visualization (VolVis) via neural radiance field (NeRF). The expressiveness of StyleRF-VolVis is upheld by its ability to accurately separate the underlying scene geometry (i.e., content) and color appearance (i.e., style), conveniently modify color, opacity, and lighting of the original rendering while maintaining visual content consistency across the views, and effectively transfer arbitrary styles from reference images to the reconstructed 3D scene. To achieve these, we design a base NeRF model for scene geometry extraction, a palette color network to classify regions of the radiance field for photorealistic editing, and an unrestricted color network to lift the color palette constraint via knowledge distillation for non-photorealistic editing. 
We demonstrate the superior quality, consistency, and flexibility of StyleRF-VolVis by experimenting with various volume rendering scenes and reference images and comparing StyleRF-VolVis against other image-based (AdaIN), video-based (ReReVST), and NeRF-based (ARF and SNeRF) style rendering solutions.","accessible_pdf":false,"authors":[{"affiliations":["University of Notre Dame, Notre Dame, United States"],"email":"ktang2@nd.edu","is_corresponding":true,"name":"Kaiyuan Tang"},{"affiliations":["University of Notre Dame, Notre Dame, United States"],"email":"chaoli.wang@nd.edu","is_corresponding":false,"name":"Chaoli Wang"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Kaiyuan Tang"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1391","time_end":"","time_stamp":"","time_start":"","title":"StyleRF-VolVis: Style Transfer of Neural Radiance Fields for Expressive Volume Visualization","uid":"v-full-1391","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1393":{"abstract":"This paper discusses challenges and design strategies in responsive design for thematic maps in information visualization. Thematic maps pose a number of unique challenges for responsiveness, such as inflexible aspect ratios that do not easily adapt to varying screen dimensions, or densely clustered visual elements in urban areas becoming illegible at smaller scales. However, design guidance on how to best address these issues is currently lacking. We conducted design sessions with eight professional designers and developers of web-based thematic maps for information visualization. Participants were asked to redesign a given map for various screen sizes and aspect ratios and to describe their reasoning for when and how they adapted the design. We report general observations of practitioners\u2019 motivations, decision-making processes, and personal design frameworks. We then derive seven challenges commonly encountered in responsive map design, and 17 strategies to address them, such as repositioning elements, segmenting the map, or using alternative visualizations. We compile these challenges and strategies into an illustrated cheat sheet targeted at anyone designing or learning to design responsive maps. 
The cheat sheet is available online: https://responsive-vis.github.io/map-cheat-sheet.","accessible_pdf":false,"authors":[{"affiliations":["University of Edinburgh, Edinburgh, United Kingdom"],"email":"sarah.schoettler@ed.ac.uk","is_corresponding":true,"name":"Sarah Sch\u00f6ttler"},{"affiliations":["University of Edinburgh, Edinburgh, United Kingdom"],"email":"uhinrich@ed.ac.uk","is_corresponding":false,"name":"Uta Hinrichs"},{"affiliations":["Inria, Bordeaux, France","University of Edinburgh, Edinburgh, United Kingdom"],"email":"bbach@inf.ed.ac.uk","is_corresponding":false,"name":"Benjamin Bach"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Sarah Sch\u00f6ttler"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1393","time_end":"","time_stamp":"","time_start":"","title":"Practices and Strategies in Responsive Thematic Map Design: A Report from Design Workshops with Experts","uid":"v-full-1393","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1394":{"abstract":"This paper presents discursive patinas, a technique to visualize discussions onto data visualizations, inspired by how people leave traces in the physical world. While data visualizations are widely discussed in online communities and social media, comments tend to be displayed separately from the visualization. We lack ways to relate these discussions to the content of the visualization, e.g., to situate comments, explain visual patterns, or question assumptions. In our visualization annotation interface, users can designate areas within the visualization to, e.g., highlight specific visual marks (anchors), attach textual comments, and add category labels, likes, and replies. By coloring and styling these designated areas, a meta visualization emerges, showing what and where people comment and annotate. These patinas show regions of heavy discussions, recent commenting activity, and the distribution of questions, suggestions, or personal stories. To study how people use anchors to discuss visualizations and understand if and how information in patinas influences people's understanding of the discussion, we ran workshops with 90 participants, including students, domain experts, and visualization researchers. Our results show that discursive patinas improve the ability to navigate discussions and guide people to comments that help understand, contextualize, or scrutinize the visualization. 
We discuss the potential of the technique to support discursive engagements, including critical readings of visualizations, design feedback, and feminist approaches to data visualization.","accessible_pdf":false,"authors":[{"affiliations":["University of Edinburgh, Edinburgh, United Kingdom","Potsdam University of Applied Sciences, Potsdam, Germany"],"email":"tobias.kauer@fh-potsdam.de","is_corresponding":true,"name":"Tobias Kauer"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"derya.akbaba@liu.se","is_corresponding":false,"name":"Derya Akbaba"},{"affiliations":["University of Applied Sciences Potsdam, Potsdam, Germany"],"email":"doerk@fh-potsdam.de","is_corresponding":false,"name":"Marian D\u00f6rk"},{"affiliations":["Inria, Bordeaux, France","University of Edinburgh, Edinburgh, United Kingdom"],"email":"bbach@inf.ed.ac.uk","is_corresponding":false,"name":"Benjamin Bach"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Tobias Kauer"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1394","time_end":"","time_stamp":"","time_start":"","title":"Discursive Patinas: Anchoring Discussions in Data Visualizations","uid":"v-full-1394","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1395":{"abstract":"Onboarding a user to a visualization dashboard entails explaining its various components, including the chart types used, the data loaded, and the interactions provided. Authoring such an onboarding experience is time-consuming and requires significant knowledge, and little guidance exists on how best to do this. End-users being onboarded to a new dashboard can be either confused and overwhelmed, or disinterested and disengaged, depending on the user\u2019s expertise. We propose interactive dashboard tours (d-tours) as semi-automated onboarding experiences for variable user expertise that preserve the user\u2019s agency, interest, and engagement. Our interactive tours concept draws from open-world game design to give the user freedom in choosing their path in the onboarding. We have implemented the concept in a tool called D-TOUR PROTOTYPE that allows authors to craft custom and interactive dashboard tours from scratch or using automatic templates. Automatically generated tours can still be customized to use different media (such as video, audio, or highlighting) or new narratives to produce a tailored onboarding experience for individual users or groups. We demonstrate the usefulness of interactive dashboard tours through use cases and expert interviews. The evaluation shows that the authors find the automation in the DTour prototype helpful and time-saving and the users find it engaging and intuitive. 
This paper and all supplemental materials are available at https://osf.io/6fbjp/.","accessible_pdf":false,"authors":[{"affiliations":["Pro2Future GmbH, Linz, Austria","Johannes Kepler University, Linz, Austria"],"email":"vaishali.dhanoa@pro2future.at","is_corresponding":true,"name":"Vaishali Dhanoa"},{"affiliations":["Johannes Kepler University, Linz, Austria"],"email":"andreas.hinterreiter@jku.at","is_corresponding":false,"name":"Andreas Hinterreiter"},{"affiliations":["Johannes Kepler University, Linz, Austria"],"email":"vanessa.fediuk@jku.at","is_corresponding":false,"name":"Vanessa Fediuk"},{"affiliations":["Aarhus University, Aarhus, Denmark"],"email":"elm@cs.au.dk","is_corresponding":false,"name":"Niklas Elmqvist"},{"affiliations":["Institute of Visual Computing & Human-Centered Technology, Vienna, Austria"],"email":"groeller@cg.tuwien.ac.at","is_corresponding":false,"name":"Eduard Gr\u00f6ller"},{"affiliations":["Johannes Kepler University Linz, Linz, Austria"],"email":"marc.streit@jku.at","is_corresponding":false,"name":"Marc Streit"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Vaishali Dhanoa"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1395","time_end":"","time_stamp":"","time_start":"","title":"D-Tour: Semi-Automatic Generation of Interactive Guided Tours for Visualization Dashboard Onboarding","uid":"v-full-1395","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1414":{"abstract":"Visualization designers often rely on examples to explore the space of possible designs, yet we have little insight into how examples shape data visualization design outcomes. While the effects of examples have been studied in other disciplines, such as web design or engineering, the results are not readily applicable to visualization design due to inconsistencies in findings and challenges unique to visualization design. Towards bridging this gap, we conduct an exploratory experiment involving 32 data visualization designers focusing on the influence of five factors (timing, quantity, diversity, data topic similarity, and data schema similarity) on objectively measurable design outcomes (e.g., numbers of designs and idea transfers). Our quantitative analysis shows that when examples are introduced after initial brainstorming, designers curate examples with topics less similar to the dataset they are working on and produce more designs with a high variation in visualization components. Also, designers copy more ideas from examples with higher data schema similarities. Our qualitative analysis of participants\u2019 thought processes provides insights into why designers incorporate examples into their designs, revealing potential factors that have not been previously investigated. Finally, we discuss how our results inform future work on quantifying designs, improving measures of effectiveness, and supporting example-based visualization design. All supplementary materials are available at https://osf.io/sbp2k/?view_only=ca14af497f5845a0b1b2c616699fefc5","accessible_pdf":false,"authors":[{"affiliations":["University of Maryland, College Park, United States"],"email":"hbako@umd.edu","is_corresponding":true,"name":"Hannah K. 
Bako"},{"affiliations":["The University of Texas at Austin, Austin, United States"],"email":"xinyi.liu@utexas.edu","is_corresponding":false,"name":"Xinyi Liu"},{"affiliations":["University of Maryland, College Park, United States"],"email":"gko1@terpmail.umd.edu","is_corresponding":false,"name":"Grace Ko"},{"affiliations":["Human Data Interaction Lab, College Park, United States"],"email":"hsong02@cs.umd.edu","is_corresponding":false,"name":"Hyemi Song"},{"affiliations":["University of Washington, Seattle, United States"],"email":"leibatt@cs.washington.edu","is_corresponding":false,"name":"Leilani Battle"},{"affiliations":["University of Maryland, College Park, United States"],"email":"leozcliu@umd.edu","is_corresponding":false,"name":"Zhicheng Liu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Hannah K. Bako"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1414","time_end":"","time_stamp":"","time_start":"","title":"Unveiling How Examples Shape Data Visualization Design Outcomes","uid":"v-full-1414","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1416":{"abstract":"Various data visualization downstream applications such as reverse engineering and interactive authoring require a vocabulary that describes the structure of visualization scenes and the procedure to manipulate them. A few scene abstractions have been proposed, but they are restricted to specific applications for a limited set of visualization types. A unified and expressive model of data visualization scenes for different downstream applications has been missing. To fill this gap, we present Manipulable Semantic Components (MSC), a computational representation of data visualization scenes, to support applications in scene understanding and augmentation. MSC consists of two parts: a unified object model describing the structure of a visualization scene in terms of semantic components, and a set of operations to generate and modify the scene components. 
We demonstrate the benefits of MSC in three applications: visualization authoring, visualization deconstruction and reuse, and animation specification.","accessible_pdf":false,"authors":[{"affiliations":["University of Maryland, College Park, United States"],"email":"leozcliu@umd.edu","is_corresponding":true,"name":"Zhicheng Liu"},{"affiliations":["University of Maryland, College Park, United States"],"email":"cchen24@umd.edu","is_corresponding":false,"name":"Chen Chen"},{"affiliations":["University of Maryland, College Park, United States"],"email":"hookerj100@gmail.com","is_corresponding":false,"name":"John Hooker"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Zhicheng Liu"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1416","time_end":"","time_stamp":"","time_start":"","title":"Manipulable Semantic Components: a Computational Representation of Data Visualization Scenes","uid":"v-full-1416","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1422":{"abstract":"Visualization items\u2014factual questions about visualizations that ask viewers to accomplish visualization tasks\u2014are regularly used in the field of information visualization as educational and evaluative materials. For example, researchers of visualization literacy require large, diverse banks of items to conduct studies where the same skill is measured repeatedly on the same participants. Yet, generating a large number of high-quality, diverse items requires significant time and expertise. To address the critical need for a large number of diverse visualization items in education and research, this paper investigates the potential for large language models (LLMs) to automate the generation of multiple-choice visualization items. Through an iterative design process, we develop an LLM-based pipeline, the VILA (Visualization Items Generated by Large LAnguage Models) pipeline, for efficiently generating visualization items that measure people\u2019s ability to accomplish visualization tasks. We use the VILA pipeline to generate 1,404 candidate items across 12 chart types and 13 visualization tasks. In collaboration with 11 visualization experts, we develop an evaluation rulebook which we then use to rate the quality of all candidate items. The result is a final bank, the VILA bank, of \u223c1,100 items. From this evaluation, we also identify and classify current limitations of LLMs in generating visualization items, and discuss the role of human oversight in ensuring quality. In addition, we demonstrate an application of our work by creating a visualization literacy test, VILA-VLAT, which measures people\u2019s ability to complete a diverse set of tasks on various types of visualizations; to show the potential of this application, we assess the convergent validity of VILA-VLAT by comparing it to the existing test VLAT via an online study (R = 0.70). Lastly, we discuss the application areas of the VILA pipeline and the VILA bank and provide practical recommendations for their use. 
All supplemental materials are available at https://osf.io/ysrhq/?view_only=e31b3ddf216e4351bb37bcedf744e9d6.","accessible_pdf":false,"authors":[{"affiliations":["Northwestern University, Evanston, United States"],"email":"yuancui2025@u.northwestern.edu","is_corresponding":true,"name":"Yuan Cui"},{"affiliations":["Northwestern University, Evanston, United States"],"email":"wanqian.ge@northwestern.edu","is_corresponding":false,"name":"Lily W. Ge"},{"affiliations":["Worcester Polytechnic Institute, Worcester, United States"],"email":"yding5@wpi.edu","is_corresponding":false,"name":"Yiren Ding"},{"affiliations":["Worcester Polytechnic Institute, Worcester, United States"],"email":"ltharrison@wpi.edu","is_corresponding":false,"name":"Lane Harrison"},{"affiliations":["Northwestern University, Evanston, United States"],"email":"fumeng.p.yang@gmail.com","is_corresponding":false,"name":"Fumeng Yang"},{"affiliations":["Northwestern University, Chicago, United States"],"email":"matthew.kay@gmail.com","is_corresponding":false,"name":"Matthew Kay"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yuan Cui"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1422","time_end":"","time_stamp":"","time_start":"","title":"Promises and Pitfalls: Using Large Language Models to Generate Visualization Items","uid":"v-full-1422","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1425":{"abstract":"Comics have been shown to be an effective method for sequential data-driven storytelling, especially for dynamic graphs that change over time. However, manually creating a data-driven comic for a dynamic graph is currently time-consuming, complex, and error-prone. In this paper, we propose DG Comics, a novel comic authoring tool for dynamic graphs that allows users to semi-automatically build the comic and annotate it. The tool uses a hierarchical clustering algorithm that we newly developed for segmenting consecutive snapshots of the dynamic graph while preserving their chronological order. It also provides rich information on both individuals and communities extracted from dynamic graphs in multiple views, where users can explore dynamic graphs and choose what to tell in comics. For evaluation, we provide an example and report results from a user study and expert review.","accessible_pdf":false,"authors":[{"affiliations":["Ulsan National Institute of Science and Technology, Ulsan, Korea, Republic of"],"email":"joohee@unist.ac.kr","is_corresponding":true,"name":"Joohee Kim"},{"affiliations":["Ulsan National Institute of Science and Technology, Ulsan, Korea, Republic of"],"email":"gusdnr0916@unist.ac.kr","is_corresponding":false,"name":"Hyunwook Lee"},{"affiliations":["Ulsan National Institute of Science and Technology, Ulsan, Korea, Republic of"],"email":"ducnm@unist.ac.kr","is_corresponding":false,"name":"Duc M. 
Nguyen"},{"affiliations":["Australian National University, Canberra, Australia"],"email":"minjeong.shin@anu.edu.au","is_corresponding":false,"name":"Minjeong Shin"},{"affiliations":["IBM Research, Cambridge, United States"],"email":"bumchul.kwon@us.ibm.com","is_corresponding":false,"name":"Bum Chul Kwon"},{"affiliations":["UNIST, Ulsan, Korea, Republic of"],"email":"sako@unist.ac.kr","is_corresponding":false,"name":"Sungahn Ko"},{"affiliations":["Aarhus University, Aarhus, Denmark"],"email":"elm@cs.au.dk","is_corresponding":false,"name":"Niklas Elmqvist"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Joohee Kim"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1425","time_end":"","time_stamp":"","time_start":"","title":"DG Comics: Semi-Automatically Authoring Graph Comics for Dynamic Graphs","uid":"v-full-1425","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1427":{"abstract":"Numerical simulation serves as a cornerstone in scientific modeling, yet the process of fine-tuning simulation parameters poses significant challenges. Conventionally, parameter adjustment relies on extensive numerical simulations, data analysis, and expert insights, resulting in substantial computational costs and low efficiency. The emergence of deep learning in recent years has provided promising avenues for more efficient exploration of parameter spaces. However, existing approaches often lack intuitive methods for precise parameter adjustment and optimization. To tackle these challenges, we introduce ParamsDrag, a model that facilitates parameter space exploration through direct interaction with visualizations. Inspired by DragGAN, our ParamsDrag model operates in three steps. First, the generative component of ParamsDrag generates visualizations based on the input simulation parameters. Second, by directly dragging structure-related features in the visualizations, users can intuitively understand the controlling effect of different parameters. Third, with the understanding from the earlier step, users can steer ParamsDrag to produce dynamic visual outcomes. 
Through experiments conducted on real-world simulations and comparisons with state-of-the-art deep learning based approaches, we demonstrate the efficacy of our solution.","accessible_pdf":false,"authors":[{"affiliations":["Computer Network Information Center, Chinese Academy of Sciences, Beijing, China","University of Chinese Academy of Sciences, Beijing, China"],"email":"liguan@sccas.cn","is_corresponding":true,"name":"Guan Li"},{"affiliations":["Beijing Forestry University, Beijing, China"],"email":"leo_edumail@163.com","is_corresponding":false,"name":"Yang Liu"},{"affiliations":["Computer Network Information Center, Chinese Academy of Sciences, Beijing, China"],"email":"sgh@sccas.cn","is_corresponding":false,"name":"Guihua Shan"},{"affiliations":["Chinese Academy of Sciences, Beijing, China"],"email":"chengshiyu@cnic.cn","is_corresponding":false,"name":"Shiyu Cheng"},{"affiliations":["Beijing Forestry University, Beijing, China"],"email":"weiqun.cao@126.com","is_corresponding":false,"name":"Weiqun Cao"},{"affiliations":["Visa Research, Palo Alto, United States"],"email":"junpeng.wang.nk@gmail.com","is_corresponding":false,"name":"Junpeng Wang"},{"affiliations":["National Taiwan Normal University, Taipei City, Taiwan"],"email":"caseywang777@gmail.com","is_corresponding":false,"name":"Ko-Chih Wang"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Guan Li"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1427","time_end":"","time_stamp":"","time_start":"","title":"ParamsDrag: Interactive Parameter Space Exploration via Image-Space Dragging","uid":"v-full-1427","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1438":{"abstract":"Differential privacy ensures the security of individual privacy but poses challenges to data exploration processes because the limited privacy budget incapacitates the flexibility of exploration and the noisy feedback of data requests leads to confusing uncertainty. In this study, we take the lead in describing corresponding exploration scenarios, including underlying requirements and available exploration strategies. To facilitate practical applications, we propose a visual analysis approach to the formulation of exploration strategies. Our approach applies a reinforcement learning model to provide diverse suggestions for exploration strategies according to the exploration intent of users. A novel visual design for representing uncertainty in correlation patterns is integrated into our prototype system to support the proposed approach. Finally, we implemented a user study and two case studies. 
The results of these studies verified that our approach can help develop strategies that satisfy the exploration intent of users.","accessible_pdf":false,"authors":[{"affiliations":["Nankai University, Tianjin, China"],"email":"wangxumeng@nankai.edu.cn","is_corresponding":true,"name":"Xumeng Wang"},{"affiliations":["Nankai University, Tianjin, China"],"email":"jiaoshuangcheng@mail.nankai.edu.cn","is_corresponding":false,"name":"Shuangcheng Jiao"},{"affiliations":["Arizona State University, Tempe, United States"],"email":"cbryan16@asu.edu","is_corresponding":false,"name":"Chris Bryan"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Xumeng Wang"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1438","time_end":"","time_stamp":"","time_start":"","title":"Defogger: A Visual Analysis Approach for Data Exploration of Sensitive Data Protected by Differential Privacy","uid":"v-full-1438","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1446":{"abstract":"We are currently witnessing an increase in web-based, data-driven initiatives that explain complex, contemporary issues through data and visualizations: climate change, sustainability, AI, or cultural discoveries. Many of these projects call themselves \"atlases\", a term that historically referred to collections of maps or scientific illustrations. To answer the question of what makes a \"visualization atlas\", we conducted a systematic analysis of 33 visualization atlases and semi-structured interviews with eight visualization atlas creators. Based on our results, we contribute (1) a definition of visualization atlases as an emerging format to present complex topics in a holistic, data-driven, and curated way through visualization, (2) a set of design patterns and design dimensions that led to (3) defining 5 visualization atlas genres, and (4) insights into atlas creation from the interviews. We found that visualization atlases are unique in that they combine exploratory visualization with narrative elements from data-driven storytelling and structured navigation mechanisms. They can act as reference, communication, or discovery tools targeting a wide range of audiences with different levels of domain knowledge. 
We conclude with a discussion of current design practices and emerging questions around the ethics and potential real-world impact of visualization atlases, aimed to inform the design and study of visualization atlases.","accessible_pdf":false,"authors":[{"affiliations":["The University of Edinburgh, Edinburgh, United Kingdom"],"email":"jinrui.w@outlook.com","is_corresponding":true,"name":"Jinrui Wang"},{"affiliations":["Newcastle University, Newcastle Upon Tyne, United Kingdom"],"email":"xinhuan.shu@gmail.com","is_corresponding":false,"name":"Xinhuan Shu"},{"affiliations":["Inria, Bordeaux, France","University of Edinburgh, Edinburgh, United Kingdom"],"email":"bbach@inf.ed.ac.uk","is_corresponding":false,"name":"Benjamin Bach"},{"affiliations":["University of Edinburgh, Edinburgh, United Kingdom"],"email":"uhinrich@ed.ac.uk","is_corresponding":false,"name":"Uta Hinrichs"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Jinrui Wang"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1446","time_end":"","time_stamp":"","time_start":"","title":"Visualization Atlases: Explaining and Exploring Complex Topics through Data, Visualization, and Narration","uid":"v-full-1446","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1451":{"abstract":"We present a systematic review, an empirical study, and a first set of considerations for designing visualizations in motion, derived from a concrete scenario in which these visualizations were used to support a primary task. In practice, when viewers are confronted with embedded visualizations, they often have to focus on a primary task and can only quickly glance at a visualization showing rich, often dynamically updated, information. As such, the visualizations must be designed so as not to distract from the primary task, while at the same time being readable and useful for aiding the primary task. For example, in games, players who are engaged in a battle have to look at their enemies but also read the remaining health of their own game character from the health bar over their character's head. Many trade-offs are possible in the design of embedded visualizations in such dynamic scenarios, which we explore in-depth in this paper with a focus on user experience. We use video games as an example of an application context with a rich existing set of visualizations in motion. We begin our work with a systematic review of in-game visualizations in motion. Next, we conduct an empirical user study to investigate how different embedded visualizations in motion designs impact user experience. We conclude with a set of considerations and trade-offs for designing visualizations in motion more broadly as derived from what we learned about video games. 
All supplemental materials of this paper are available at osf.io/3v8wm/.","accessible_pdf":false,"authors":[{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Orsay, France","Inria, Saclay, France"],"email":"yaolijie0219@gmail.com","is_corresponding":true,"name":"Lijie Yao"},{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Orsay, France","Inria, Saclay, France"],"email":"federicabucchieri@gmail.com","is_corresponding":false,"name":"Federica Bucchieri"},{"affiliations":["Carleton University, Ottawa, Canada"],"email":"dieselfish@gmail.com","is_corresponding":false,"name":"Victoria McArthur"},{"affiliations":["LISN, Universit\u00e9 Paris-Saclay, CNRS, INRIA, Orsay, France"],"email":"anastasia.bezerianos@universite-paris-saclay.fr","is_corresponding":false,"name":"Anastasia Bezerianos"},{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Orsay, France","Inria, Saclay, France"],"email":"petra.isenberg@inria.fr","is_corresponding":false,"name":"Petra Isenberg"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Lijie Yao"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1451","time_end":"","time_stamp":"","time_start":"","title":"User Experience of Visualizations in Motion: A Case Study and Design Considerations","uid":"v-full-1451","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1461":{"abstract":"This paper presents a practical approach for the optimization of topological simplification, a central pre-processing step for the analysis and visualization of scalar data. Given an input scalar field f and a set of \u201csignal\u201d persistence pairs to maintain, our approach produces an output field g that is close to f and which optimizes (i) the cancellation of \u201cnon-signal\u201d pairs, while (ii) preserving the \u201csignal\u201d pairs. In contrast to pre-existing simplification approaches, our method is not restricted to persistence pairs involving extrema and can thus address a larger class of topological features, in particular saddle pairs in three-dimensional scalar data. Our approach leverages recent generic persistence optimization frameworks and extends them with tailored accelerations specific to the problem of topological simplification. Extensive experiments report substantial accelerations over these frameworks, thereby making topological simplification optimization practical for real-life datasets. Our work enables a direct visualization and analysis of the topologically simplified data, e.g., via isosurfaces of simplified topology (fewer components and handles). We apply our approach to the extraction of prominent filament structures in three-dimensional data. Specifically, we show that our pre-simplification of the data leads to practical improvements over standard topological techniques for removing filament loops. We also show how our framework can be used to repair genus defects in surface processing. 
Finally, we provide a C++ implementation for reproducibility purposes.","accessible_pdf":false,"authors":[{"affiliations":["CNRS, Paris, France","Sorbonne Universit\u00e9, Paris, France"],"email":"mohamed.kissi@lip6.fr","is_corresponding":true,"name":"Mohamed KISSI"},{"affiliations":["CNRS, Paris, France","Sorbonne Universit\u00e9, Paris, France"],"email":"mathieu.pont@lip6.fr","is_corresponding":false,"name":"Mathieu Pont"},{"affiliations":["University of Arizona, Tucson, United States"],"email":"josh@cs.arizona.edu","is_corresponding":false,"name":"Joshua A Levine"},{"affiliations":["CNRS, Paris, France","Sorbonne Universit\u00e9, Paris, France"],"email":"julien.tierny@sorbonne-universite.fr","is_corresponding":false,"name":"Julien Tierny"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Mohamed KISSI"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1461","time_end":"","time_stamp":"","time_start":"","title":"A Practical Solver for Scalar Data Topological Simplification","uid":"v-full-1461","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1472":{"abstract":"Trained on vast corpora, Large Language Models (LLMs) have the potential to encode visualization design knowledge and best practices. However, if they fail to do so, they might provide unreliable visualization recommendations. What visualization design preferences, then, have LLMs learned? We contribute DracoGPT, an approach for extracting and modeling visualization design preferences from LLMs. To assess varied tasks, we develop two pipelines\u2014DracoGPT-Rank and DracoGPT-Recommend\u2014to model LLMs prompted to either rank or recommend visual encoding specifications. We use Draco as a shared knowledge base in which to represent LLM design preferences and compare them to best practices from empirical research. We demonstrate that DracoGPT models the preferences expressed by LLMs well, enabling analysis in terms of Draco design constraints. Across a suite of backing LLMs, we find that DracoGPT-Rank and DracoGPT-Recommend moderately agree with each other, but both substantively diverge from guidelines drawn from human subjects experiments. Future work can build on our approach to expand Draco's knowledge base to model a richer set of preferences and serve as a reliable and cost-effective stand-in for LLMs.","accessible_pdf":false,"authors":[{"affiliations":["University of Washington, Seattle, United States"],"email":"wwill@cs.washington.edu","is_corresponding":true,"name":"Huichen Will Wang"},{"affiliations":["University of Washington, Seattle, United States"],"email":"mgord@cs.stanford.edu","is_corresponding":false,"name":"Mitchell L. 
Gordon"},{"affiliations":["University of Washington, Seattle, United States"],"email":"leibatt@cs.washington.edu","is_corresponding":false,"name":"Leilani Battle"},{"affiliations":["University of Washington, Seattle, United States"],"email":"jheer@uw.edu","is_corresponding":false,"name":"Jeffrey Heer"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Huichen Will Wang"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1472","time_end":"","time_stamp":"","time_start":"","title":"DracoGPT: Extracting Visualization Design Preferences from Large Language Models","uid":"v-full-1472","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1474":{"abstract":"Recent advancements in Large Language Models (LLMs) and Prompt Engineering have made chatbot customization more accessible, significantly reducing barriers to tasks that previously required programming skills. However, prompt evaluation, especially at the dataset scale, remains complex due to the need to assess prompts across thousands of test instances within a dataset. Our study, based on a comprehensive literature review and pilot study, summarized five critical challenges in prompt evaluation. In response, we introduce a feature-oriented workflow for systematic prompt evaluation, focusing on text summarization. Our workflow advocates feature metrics such as complexity, formality, or naturalness, instead of using traditional quality metrics like ROUGE. This design choice enables a more user-friendly evaluation of prompts, as it guides users in sorting through the ambiguity inherent in natural language. To support this workflow, we introduce Awesum, a visual analytics system that facilitates identifying optimal prompt refinements through interactive visualizations, featuring a novel Prompt Comparator design that employs a BubbleSet-inspired design enhanced by dimensionality reduction techniques. We evaluate the effectiveness and general applicability of the system with practitioners from various domains and found that (1) our design helps overcome the learning curve for non-technical people to conduct a systematic evaluation, and (2) our feature-oriented workflow has the potential to generalize to other NLG and image-generation tasks. 
For future work, we advocate moving towards feature-oriented evaluation of LLM prompts and discuss unsolved challenges in terms of human-agent interaction.","accessible_pdf":false,"authors":[{"affiliations":["University of California Davis, Davis, United States"],"email":"ytlee@ucdavis.edu","is_corresponding":true,"name":"Sam Yu-Te Lee"},{"affiliations":["University of California, Davis, Davis, United States"],"email":"abahukhandi@ucdavis.edu","is_corresponding":false,"name":"Aryaman Bahukhandi"},{"affiliations":["University of California at Davis, Davis, United States"],"email":"dyuliu@ucdavis.edu","is_corresponding":false,"name":"Dongyu Liu"},{"affiliations":["University of California at Davis, Davis, United States"],"email":"ma@cs.ucdavis.edu","is_corresponding":false,"name":"Kwan-Liu Ma"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Sam Yu-Te Lee"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1474","time_end":"","time_stamp":"","time_start":"","title":"Towards Dataset-scale and Feature-oriented Evaluation of Text Summarization in Large Language Model Prompts","uid":"v-full-1474","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1480":{"abstract":"We propose the notion of Attention-aware Visualizations (AAVs) that track the user's perception of a visual representation over time and feed this information back to the visualization. This idea is particularly useful for ubiquitous and immersive analytics where knowing which embedded visualizations the user is looking at can be used to make visualizations react appropriately to the user's attention: for example, by highlighting data the user has not yet seen. We can separate the approach into three components: (1) measuring the user's gaze on a visualization and its parts; (2) tracking the user's attention over time; and (3) reactively modifying the visual representation based on the current attention metric. In this paper, we present two separate implementations of AAV: a 2D numeric integration of attention for web-based visualizations that can use an embodied eye-tracker to capture the user's gaze, and a 3D implementation that uses the stencil buffer to track the visibility of each individual mark in a visualization. Both methods provide similar mechanisms for accumulating attention over time and changing the appearance of marks in response. We also present results from a controlled laboratory experiment studying different visual feedback mechanisms for attention.","accessible_pdf":false,"authors":[{"affiliations":["Aarhus University, Aarhus, Denmark"],"email":"arvind@cs.au.dk","is_corresponding":true,"name":"Arvind Srinivasan"},{"affiliations":["Aarhus University, Aarhus N, Denmark"],"email":"johannes@ellemose.eu","is_corresponding":false,"name":"Johannes Ellemose"},{"affiliations":["Bangor University, Bangor, United Kingdom"],"email":"p.butcher@bangor.ac.uk","is_corresponding":false,"name":"Peter W. S. Butcher"},{"affiliations":["Bangor University, Bangor, United Kingdom"],"email":"p.ritsos@bangor.ac.uk","is_corresponding":false,"name":"Panagiotis D. 
Ritsos"},{"affiliations":["Aarhus University, Aarhus, Denmark"],"email":"elm@cs.au.dk","is_corresponding":false,"name":"Niklas Elmqvist"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Arvind Srinivasan"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1480","time_end":"","time_stamp":"","time_start":"","title":"Attention-Aware Visualization: Tracking and Responding to User Perception Over Time","uid":"v-full-1480","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1483":{"abstract":"Egocentric networks, often visualized as node-link diagrams, portray the complex relationship (link) dynamics between an entity (node) and others. However, common analytics tasks are multifaceted, encompassing interactions among four key aspects: strength, function, structure, and content. Current node-link visualization designs may fall short, focusing narrowly on certain aspects and neglecting the holistic, dynamic nature of egocentric networks. To bridge this gap, we introduce SpreadLine, a novel visualization framework designed to enable the visual exploration of egocentric networks from these four aspects at the microscopic level. Leveraging the intuitive appeal of storyline visualizations, SpreadLine adopts a storyline-based design to represent entities and their evolving relationships. We further encode essential topological information in the layout and condense the contextual information in a metro map metaphor, allowing for a more engaging and effective way to explore temporal and attribute-based information. To guide our work, with a thorough review of pertinent literature, we have distilled a task taxonomy that addresses the analytical needs specific to egocentric network exploration. Acknowledging the diverse analytical requirements of users, SpreadLine offers customizable encodings to enable users to tailor the framework for their tasks. 
We demonstrate the efficacy and general applicability of SpreadLine through three diverse real-world case studies and a usability study.","accessible_pdf":false,"authors":[{"affiliations":["University of California, Davis, Davis, United States"],"email":"yskuo@ucdavis.edu","is_corresponding":true,"name":"Yun-Hsin Kuo"},{"affiliations":["University of California at Davis, Davis, United States"],"email":"dyuliu@ucdavis.edu","is_corresponding":false,"name":"Dongyu Liu"},{"affiliations":["University of California at Davis, Davis, United States"],"email":"ma@cs.ucdavis.edu","is_corresponding":false,"name":"Kwan-Liu Ma"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yun-Hsin Kuo"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1483","time_end":"","time_stamp":"","time_start":"","title":"SpreadLine: Visualizing Egocentric Dynamic Influence","uid":"v-full-1483","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1487":{"abstract":"Referential gestures, or as termed in linguistics, deixis, are an essential part of communication around data visualizations. Despite their importance, such gestures are often overlooked when documenting data analysis meetings. Transcripts, for instance, fail to capture gestures, and video recordings may not adequately capture or emphasize them. We introduce a novel method for documenting collaborative data meetings that treats deixis as a first-class citizen. Our proposed framework captures cursor-based gestural data along with audio and converts them into interactive documents. The framework leverages a large language model to identify word correspondences with gestures. These identified references are used to create context-based annotations in the resulting interactive document. We assess the effectiveness of our proposed method through a user study, finding that participants preferred our automated interactive documentation over recordings, transcripts, and manual note-taking. Furthermore, we derive a preliminary taxonomy of cursor-based deictic gestures from participant actions during the study. This taxonomy offers further opportunities for better utilizing cursor-based deixis in collaborative data analysis scenarios.","accessible_pdf":false,"authors":[{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"hatch.on27@gmail.com","is_corresponding":true,"name":"Chang Han"},{"affiliations":["The University of Utah, Salt Lake City, United States"],"email":"kisaacs@sci.utah.edu","is_corresponding":false,"name":"Katherine E. 
Isaacs"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Chang Han"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1487","time_end":"","time_stamp":"","time_start":"","title":"A Deixis-Centered Approach for Documenting Remote Synchronous Communication around Data Visualizations","uid":"v-full-1487","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1488":{"abstract":"A year ago, we submitted an IEEE VIS paper entitled \u201cSwaying the Public? Impacts of Election Forecast Visualizations on Emotion, Trust, and Intention in the 2022 U.S. Midterms\u201d [68], which was later bestowed with the honor of a best paper award. Yet, studying such a complex phenomenon required us to explore many more design paths than we could count, and certainly more than we could document in a single paper. This paper, then, is the unwritten prequel\u2014the backstory. It chronicles our journey from a simple idea\u2014to study visualizations for election forecasts\u2014through obstacles such as developing meaningfully different, easy-to-understand forecast visualizations, crafting professional-looking forecasts, and grappling with how to study perceptions of the forecasts before, during, and after the 2022 U.S. midterm elections. Our backstory began with developing a design space for two-party election forecasts, defining dimensions such as data transformations, visual channels, layouts, and types of animated narratives. We then qualitatively evaluated ten representative prototypes in this design space through interviews with 13 participants. The interviews yielded invaluable insights into how people interpret uncertainty visualizations and reason about probability in a U.S. election context, such as confounding win probability with vote share and erroneously forming connections between concrete visual representations (like dots) and real-world entities (like votes). Informed by these insights, we revised our prototypes to address ambiguity in interpreting visual encodings, particularly through the inclusion of extensive annotations. As we navigated these design paths, we contributed a design space and insights that may help others when designing uncertainty visualizations. 
We also hope that our design lessons and research process can inspire the research community when exploring topics related to designing visualizations for the general public.","accessible_pdf":false,"authors":[{"affiliations":["Northwestern University, Evanston, United States"],"email":"fumeng.p.yang@gmail.com","is_corresponding":true,"name":"Fumeng Yang"},{"affiliations":["Northwestern University, Evanston, United States","Northwestern University, Evanston, United States"],"email":"mandicai2028@u.northwestern.edu","is_corresponding":false,"name":"Mandi Cai"},{"affiliations":["Northwestern University, Evanston, United States"],"email":"chloemortenson2026@u.northwestern.edu","is_corresponding":false,"name":"Chloe Rose Mortenson"},{"affiliations":["Northwestern University, Evanston, United States"],"email":"hoda@u.northwestern.edu","is_corresponding":false,"name":"Hoda Fakhari"},{"affiliations":["Northwestern University, Evanston, United States"],"email":"aysedlokmanoglu@gmail.com","is_corresponding":false,"name":"Ayse Deniz Lokmanoglu"},{"affiliations":["Northwestern University, Evanston, United States"],"email":"nicholas.diakopoulos@gmail.com","is_corresponding":false,"name":"Nicholas Diakopoulos"},{"affiliations":["Northwestern University, Evanston, United States"],"email":"erik.nisbet@northwestern.edu","is_corresponding":false,"name":"Erik Nisbet"},{"affiliations":["Northwestern University, Chicago, United States"],"email":"matthew.kay@gmail.com","is_corresponding":false,"name":"Matthew Kay"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Fumeng Yang"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1488","time_end":"","time_stamp":"","time_start":"","title":"The Backstory to \u201cSwaying the Public\u201d: A Design Chronicle of Election Forecast Visualizations","uid":"v-full-1488","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1489":{"abstract":"Projecting high-dimensional vectors into two dimensions for visualization, known as embedding visualization, facilitates perceptual reasoning and interpretation. Comparison of multiple embedding visualizations drives decision-making in many domains, but conventional comparison methods are limited by a reliance on direct point correspondences. This requirement precludes embedding comparisons without point correspondences, such as two different datasets of annotated images, and fails to capture meaningful higher-level relationships among point groups. To address these shortcomings, we propose a general framework to compare embedding visualizations based on shared class labels rather than individual points. Our approach partitions points into regions corresponding to three key class concepts\u2014confusion, neighborhood, and relative size\u2014to characterize intra- and inter-class relationships. Informed by a preliminary user study, we realize an implementation of our framework using perceptual neighborhood graphs to define these regions and introduce metrics to quantify each concept. We demonstrate the generality of our framework with use cases from machine learning and single-cell biology, highlighting our metrics' ability to surface insightful comparisons across label hierarchies. 
To assess the effectiveness of our approach, we conducted a user study with five machine learning researchers and six single-cell biologists using an interactive and scalable prototype developed in Python and Rust. Our metrics enable more structured comparison through visual guidance and increased participants\u2019 confidence in their findings.","accessible_pdf":false,"authors":[{"affiliations":["Harvard Medical School, Boston, United States"],"email":"trevor_manz@g.harvard.edu","is_corresponding":true,"name":"Trevor Manz"},{"affiliations":["Ozette Technologies, Seattle, United States"],"email":"f.lekschas@gmail.com","is_corresponding":false,"name":"Fritz Lekschas"},{"affiliations":["Ozette Technologies, Seattle, United States"],"email":"palmergreene@gmail.com","is_corresponding":false,"name":"Evan Greene"},{"affiliations":["Ozette Technologies, Seattle, United States"],"email":"greg@ozette.com","is_corresponding":false,"name":"Greg Finak"},{"affiliations":["Harvard Medical School, Boston, United States"],"email":"nils@hms.harvard.edu","is_corresponding":false,"name":"Nils Gehlenborg"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Trevor Manz"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1489","time_end":"","time_stamp":"","time_start":"","title":"A General Framework for Comparing Embedding Visualizations Across Class-Label Hierarchies","uid":"v-full-1489","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1494":{"abstract":"Topological abstractions offer a method to summarize the behavior of vector fields, but computing them robustly can be challenging due to numerical precision issues. One alternative is to represent the vector field using a discrete approach, which constructs a collection of pairs of simplices in the input mesh that satisfies criteria introduced by Forman\u2019s discrete Morse theory. While numerous approaches exist to compute pairs in the restricted case of the gradient of a scalar field, state-of-the-art algorithms for the general case of vector fields require expensive optimization procedures. This paper introduces a fast, novel approach for pairing simplices of two-dimensional, triangulated vector fields that do not vary in time. The key insight of our approach is that we can employ a local evaluation, inspired by the approach used to construct a discrete gradient field, where every cell in a mesh is considered by no more than one of its vertices. Specifically, we observe that for any edge in the input mesh, we can uniquely assign an outward direction of flow. We can further expand this consistent notion of outward flow at each vertex, which corresponds to the concept of a downhill flow in the case of scalar fields. Working with outward flow enables a linear-time algorithm that processes the (outward) neighborhoods of each vertex one-by-one, similar to the approach used for scalar fields. We couple our approach to constructing discrete vector fields with a method to extract, simplify, and visualize topological features. 
Empirical results on analytic and simulation data demonstrate drastic improvements in running time, produce features similar to the current state-of-the-art, and show the application of simplification to large, complex flows.","accessible_pdf":false,"authors":[{"affiliations":["University of Arizona, Tucson, United States"],"email":"finkent@arizona.edu","is_corresponding":true,"name":"Tanner Finken"},{"affiliations":["Sorbonne Universit\u00e9, Paris, France"],"email":"julien.tierny@sorbonne-universite.fr","is_corresponding":false,"name":"Julien Tierny"},{"affiliations":["University of Arizona, Tucson, United States"],"email":"josh@cs.arizona.edu","is_corresponding":false,"name":"Joshua A Levine"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Tanner Finken"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1494","time_end":"","time_stamp":"","time_start":"","title":"Localized Evaluation for Constructing Discrete Vector Fields","uid":"v-full-1494","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1500":{"abstract":"Haptic feedback provides an essential sensory stimulus crucial for interacting with and analyzing three-dimensional spatio-temporal phenomena on surface visualizations. Given its ability to provide enhanced spatial perception and scene maneuverability, virtual reality (VR) catalyzes haptic interactions on surface visualizations. Various interaction modes, encompassing both mid-air and on-surface interactions\u2014with or without the application of assisting force stimuli\u2014have been explored using haptic force feedback devices. In this paper, we evaluate the use of on-surface and assisted on-surface haptic modes of interaction compared to a no-haptic interaction mode. A force-based haptic stylus is used for all three modalities; the on-surface mode uses collision-based forces, whereas the assisted on-surface mode is accompanied by an additional snapping force. We conducted a within-subjects user study involving fundamental interaction tasks performed on surface visualizations. Keeping a consistent visual design across all three modes, our study incorporates tasks that require the localization of the highest, lowest, and random points on surfaces; and tasks that focus on brushing curves on surfaces with varying complexity and occlusion levels. Our findings show that participants took almost the same time to brush curves using all the interaction modes. They could draw smoother curves using the on-surface interaction modes compared to the no-haptic mode. However, the assisted on-surface mode provided better accuracy than the on-surface mode. The on-surface mode was slower in point localization, but the accuracy depended on the visual cues and occlusions associated with the tasks. 
Finally, we discuss participant feedback on using haptic force feedback as a tangible input modality and share takeaways to aid the design of haptics-based tangible interactions for surface visualizations.","accessible_pdf":false,"authors":[{"affiliations":["University of Calgary, Calgary, Canada"],"email":"hamza.afzaal@ucalgary.ca","is_corresponding":true,"name":"Hamza Afzaal"},{"affiliations":["University of Calgary, Calgary, Canada"],"email":"ualim@ucalgary.ca","is_corresponding":false,"name":"Usman Alim"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Hamza Afzaal"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1500","time_end":"","time_stamp":"","time_start":"","title":"Evaluating Force-based Haptics for Immersive Tangible Interactions with Surface Visualizations","uid":"v-full-1500","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1502":{"abstract":"Visualization is widely used for exploring personal data, but many visualization authoring systems do not support expressing data in flexible, personal, and organic layouts. Sketching is an accessible tool for experimenting with visualization designs, but formalizing sketched elements into structured data representations is difficult, as modifying hand-drawn glyphs to encode data when available is labour-intensive and error prone. We propose an approach where authors structure their own expressive templates, capturing implicit style as well as explicit data mappings, through sketching a representative visualization for an envisioned or partial dataset. Our approach seeks to support freeform exploration and partial specification, balanced against interactive machine support for specifying the generative procedural rules. We implement this approach in DataGarden, a system designed to support hierarchical data visualizations, and evaluate it with 12 participants in a reproduction study and four experts in a freeform creative task. Participants readily picked up the core idea of template authoring, and the variety of workflows we observed highlight how this process serves design and data ideation as well as visual constraint iteration. 
We discuss challenges in implementing the design considerations underpinning DataGarden, and illustrate its potential in a gallery of visualizations generated from authored templates.","accessible_pdf":false,"authors":[{"affiliations":["Universit\u00e9 Paris-Saclay, Orsay, France"],"email":"anna.offenwanger@gmail.com","is_corresponding":true,"name":"Anna Offenwanger"},{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Inria, LISN, Orsay, France"],"email":"theophanis.tsandilas@inria.fr","is_corresponding":false,"name":"Theophanis Tsandilas"},{"affiliations":["University of Toronto, Toronto, Canada"],"email":"fanny@dgp.toronto.edu","is_corresponding":false,"name":"Fanny Chevalier"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Anna Offenwanger"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1502","time_end":"","time_stamp":"","time_start":"","title":"DataGarden: Formalizing Personal Sketches into Structured Visualization Templates","uid":"v-full-1502","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1503":{"abstract":"The increasing reliance on Large Language Models (LLMs) for health information seeking can pose severe risks due to the potential for misinformation and the complexity of these topics. This paper introduces KnowNet, a visualization system that integrates LLMs with Knowledge Graphs (KG) to provide enhanced accuracy and structured exploration. One core idea in KnowNet is to conceptualize the understanding of a subject as the gradual construction of a graph visualization, aligning the user's cognitive process with both the structured data in KGs and the unstructured outputs from LLMs. Specifically, we extracted triples (e.g., entities and their relations) from LLM outputs and mapped them into the validated information and supporting evidence in external KGs. Based on the neighborhood of the currently explored entities in KGs, KnowNet provides recommendations for further inquiry, aiming to guide a comprehensive understanding without overlooking critical aspects. A progressive graph visualization is proposed to show the alignment between LLMs and KGs, track previous inquiries, and connect this history with current queries and next-step recommendations. 
We demonstrate the effectiveness of our system via use cases and expert interviews.","accessible_pdf":false,"authors":[{"affiliations":["University of Minnesota, Minneapolis, United States"],"email":"yan00111@umn.edu","is_corresponding":false,"name":"Youfu Yan"},{"affiliations":["University of Minnesota, Minneapolis, United States"],"email":"hou00127@umn.edu","is_corresponding":false,"name":"Yu Hou"},{"affiliations":["University of Minnesota, Minneapolis, United States"],"email":"xiao0290@umn.edu","is_corresponding":false,"name":"Yongkang Xiao"},{"affiliations":["University of Minnesota, Minneapolis, United States"],"email":"zhan1386@umn.edu","is_corresponding":false,"name":"Rui Zhang"},{"affiliations":["University of Minnesota, Minneapolis , United States"],"email":"qianwen@umn.edu","is_corresponding":true,"name":"Qianwen Wang"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Qianwen Wang"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1503","time_end":"","time_stamp":"","time_start":"","title":"Guided Health-related Information Seeking from LLMs via Knowledge Graph Integration","uid":"v-full-1503","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1504":{"abstract":"A wide range of visualization authoring interfaces enable the creation of highly customized visualizations. However, prioritizing expressiveness often impedes the learnability of the authoring interface. The diversity of users, such as varying computational skills and prior experiences in user interfaces, makes it even more challenging for a single authoring interface to satisfy the needs of a broad audience. In this paper, we introduce a framework to balance learnability and expressivity in a visualization authoring system. Adopting insights from learnability studies, such as multimodal interaction and visualization literacy, we explore the design space of blending multiple visualization authoring interfaces for supporting authoring tasks in a complementary and flexible manner. To evaluate the effectiveness of blending interfaces, we implemented a proof-of-concept system, Blace, that combines four common visualization authoring interfaces\u2014template-based, shelf configuration, natural language, and code editor\u2014that are tightly linked to one another to help users easily relate unfamiliar interfaces to more familiar ones. Using the system, we conducted a user study with 12 domain experts who regularly visualize genomics data as part of their analysis workflow. Participants with varied visualization and programming backgrounds were able to successfully reproduce complex visualization examples without a guided tutorial in the study. Feedback from a post-study qualitative questionnaire further suggests that blending interfaces enabled participants to learn the system easily and assisted them in confidently editing unfamiliar visualization grammar in the code editor, enabling expressive customization. 
Reflecting on our study results and the design of our system, we discuss the different interaction patterns that we identified and design implications for blending visualization authoring interfaces.","accessible_pdf":false,"authors":[{"affiliations":["Harvard Medical School, Boston, United States"],"email":"sehi_lyi@hms.harvard.edu","is_corresponding":true,"name":"Sehi L'Yi"},{"affiliations":["Eindhoven University of Technology, Eindhoven, Netherlands"],"email":"a.v.d.brandt@tue.nl","is_corresponding":false,"name":"Astrid van den Brandt"},{"affiliations":["Harvard Medical School, Boston, United States"],"email":"etowah_adams@hms.harvard.edu","is_corresponding":false,"name":"Etowah Adams"},{"affiliations":["Harvard Medical School, Boston, United States"],"email":"huyen_nguyen@hms.harvard.edu","is_corresponding":false,"name":"Huyen N. Nguyen"},{"affiliations":["Harvard Medical School, Boston, United States"],"email":"nils@hms.harvard.edu","is_corresponding":false,"name":"Nils Gehlenborg"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Sehi L'Yi"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1504","time_end":"","time_stamp":"","time_start":"","title":"Learnable and Expressive Visualization Authoring Through Blended Interfaces","uid":"v-full-1504","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1522":{"abstract":"Despite the recent surge of research efforts to make data visualizations accessible to people who are blind or have low-vision (BLV), how to support BLV people's data analysis remains an important and challenging question. As refreshable tactile displays (RTDs) become cheaper and conversational agents continue to improve, their combination provides a promising approach to support BLV people's interactive data analysis. To understand how BLV people would use and react to a system combining an RTD with a conversational agent, we conducted a Wizard-of-Oz study with 11 BLV participants involving line graphs, bar charts, and isarithmic maps. From an analysis of participant interactions, we identified nine distinct patterns and learned that the choice of modalities depended on the type of task and prior experience with tactile graphics. We also found that participants strongly preferred the combination of RTD and speech to a single modality, and that participants with more tactile experience described how tactile images facilitated deeper engagement with the data and supported independent interpretation. 
Our findings will inform the design of interfaces for such interactive mixed-modality systems.","accessible_pdf":false,"authors":[{"affiliations":["Monash University, Melbourne, Australia"],"email":"samuel.reinders@monash.edu","is_corresponding":true,"name":"Samuel Reinders"},{"affiliations":["Monash University, Melbourne, Australia"],"email":"matthew.butler@monash.edu","is_corresponding":false,"name":"Matthew Butler"},{"affiliations":["Monash University, Clayton, Australia"],"email":"ingrid.zukerman@monash.edu","is_corresponding":false,"name":"Ingrid Zukerman"},{"affiliations":["Yonsei University, Seoul, Korea, Republic of","Microsoft Research, Redmond, United States"],"email":"b.lee@yonsei.ac.kr","is_corresponding":false,"name":"Bongshin Lee"},{"affiliations":["Monash University, Melbourne, Australia"],"email":"lizhen.qu@monash.edu","is_corresponding":false,"name":"Lizhen Qu"},{"affiliations":["Monash University, Melbourne, Australia"],"email":"kim.marriott@monash.edu","is_corresponding":false,"name":"Kim Marriott"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Samuel Reinders"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1522","time_end":"","time_stamp":"","time_start":"","title":"When Refreshable Tactile Displays Meet Conversational Agents: Investigating Accessible Data Presentation and Analysis with Touch and Speech","uid":"v-full-1522","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1533":{"abstract":"We introduce DiffFit, a differentiable algorithm for fitting protein atomistic structures into an experimentally reconstructed cryo-electron microscopy (cryo-EM) volume map. This process is essential in structural biology to semi-automatically reconstruct large meso-scale models of complex protein assemblies and complete cellular structures that are based on measured cryo-EM data. Current approaches require manual fitting in 3D that already results in approximately aligned structures followed by an automated fine-tuning of the alignment. With our DiffFit approach, we enable domain scientists to automatically fit new structures and visualize the fitting results for inspection and interactive revision. Our fitting begins with differentiable 3D rigid transformations of the protein atom coordinates, followed by sampling the density values at its atom coordinates from the target cryo-EM volume. To ensure a meaningful correlation between the sampled densities and the protein structure, we propose a novel loss function based on a multi-resolution volume-array approach and the exploitation of the negative space. Such a loss function serves as a critical metric for assessing the fitting quality, ensuring both fitting accuracy and improved visualization of the results. We assessed the placement quality of DiffFit with several large, realistic datasets and found its quality to be superior to that of previous methods. We further evaluated our method in two use cases. First, we demonstrate its use in the process of automating the integration of known composite structures into larger protein complexes. Second, we show that it facilitates the fitting of predicted protein domains into volume densities to aid researchers in the identification of unknown proteins. 
We implemented our algorithm as an open-source plugin (github.com/nanovis/DiffFitViewer) in ChimeraX, a leading visualization software in the field. All supplemental materials are available at osf.io/5tx4q.","accessible_pdf":false,"authors":[{"affiliations":["King Abdullah University of Science and Technology, Thuwal, Saudi Arabia"],"email":"deng.luo@kaust.edu.sa","is_corresponding":true,"name":"Deng Luo"},{"affiliations":["King Abdullah University of Science and Technology, Thuwal, Saudi Arabia"],"email":"zainab.alsuwaykit@kaust.edu.sa","is_corresponding":false,"name":"Zainab Alsuwaykit"},{"affiliations":["King Abdullah University of Science and Technology, Thuwal, Saudi Arabia"],"email":"dawar.khan@kaust.edu.sa","is_corresponding":false,"name":"Dawar Khan"},{"affiliations":["King Abdullah University of Science and Technology, Thuwal, Saudi Arabia"],"email":"ondrej.strnad@kaust.edu.sa","is_corresponding":false,"name":"Ond\u0159ej Strnad"},{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Orsay, France","Inria, Saclay, France"],"email":"tobias.isenberg@gmail.com","is_corresponding":false,"name":"Tobias Isenberg"},{"affiliations":["King Abdullah University of Science and Technology, Thuwal, Saudi Arabia"],"email":"ivan.viola@kaust.edu.sa","is_corresponding":false,"name":"Ivan Viola"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Deng Luo"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1533","time_end":"","time_stamp":"","time_start":"","title":"DiffFit: Visually-Guided Differentiable Fitting of Molecule Structures to a Cryo-EM Map","uid":"v-full-1533","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1544":{"abstract":"Large Language Models (LLMs) have been successfully adopted for a variety of visualization tasks, but how far are we from perceptually aware LLMs that can predict human takeaways from visualizations? Graphical perception literature has shown that human chart takeaways are sensitive to visualization design choices, such as the spatial arrangement. In this work, we examine how well LLMs can predict such design choice sensitivity when generating takeaways, using bar charts with varying spatial layouts as a case study. We test four common chart arrangements: vertically juxtaposed, horizontally juxtaposed, overlaid, and stacked, through three experimental phases. In Phase 1, we identified the optimal configuration of LLMs to generate meaningful chart takeaways, across four LLMs (GPT3.5, GPT4, GPT4V, and Gemini 1.0 Pro), two temperature settings (0, 0.7), four chart specifications (Vega-Lite, Matplotlib, ggplot2, and scene graphs), and several prompting strategies. We found that even state-of-the-art LLMs can struggle to generate factually accurate takeaways. In Phase 2, using the optimal LLM configuration, we generated 30 chart takeaways across the four arrangements of bar charts using two datasets, with both zero-shot and one-shot settings. Compared to data on human takeaways from prior work, we found that the takeaways LLMs generate often do not align with human comparisons. In Phase 3, we examined the effect of the charts\u2019 underlying data values on takeaway alignment between humans and LLMs, and found both matches and mismatches. 
Overall, our work evaluates the ability of LLMs to emulate human interpretations of data and points to challenges and opportunities in using LLMs to predict human-aligned chart takeaways.","accessible_pdf":false,"authors":[{"affiliations":["University of Washington, Seattle, United States"],"email":"wwill@cs.washington.edu","is_corresponding":true,"name":"Huichen Will Wang"},{"affiliations":["Adobe Research, Seattle, United States"],"email":"jhoffs@adobe.com","is_corresponding":false,"name":"Jane Hoffswell"},{"affiliations":["University of Massachusetts Amherst, Amherst, United States"],"email":"yukithane@gmail.com","is_corresponding":false,"name":"Sao Myat Thazin Thane"},{"affiliations":["Adobe Research, San Jose, United States"],"email":"victorbursztyn2022@u.northwestern.edu","is_corresponding":false,"name":"Victor S. Bursztyn"},{"affiliations":["Georgia Tech, Atlanta, United States"],"email":"cxiong@gatech.edu","is_corresponding":false,"name":"Cindy Xiong Bearfield"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Huichen Will Wang"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1544","time_end":"","time_stamp":"","time_start":"","title":"How Aligned are Human Chart Takeaways and LLM Predictions? A Case Study on Bar Charts with Varying Layouts","uid":"v-full-1544","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1547":{"abstract":"Visual validation of regression models in scatterplots is a common practice for assessing model quality, yet its efficacy remains unquantified. We conducted two empirical experiments to investigate individuals' ability to visually validate linear regression models (linear trends) and to examine the impact of common visualization designs on validation quality. The first experiment showed that the level of accuracy for visual estimation of slope (i.e., fitting a line to data) is higher than for visual validation of slope (i.e., accepting a shown line). Notably, we found bias toward slopes that are \u201ctoo steep\u201d in both cases. This led to novel insights that participants naturally assessed regression with orthogonal distances between the points and the line (i.e., ODR regression) rather than the common vertical distances (OLS regression). In the second experiment, we investigated whether incorporating common designs for regression visualization (error lines, bounding boxes, and confidence intervals) would improve visual validation. Even though error lines reduced validation bias, results failed to show the desired improvements in accuracy for any design. 
Overall, our findings suggest caution in using visual model validation for linear trends in scatterplots.","accessible_pdf":false,"authors":[{"affiliations":["University of Cologne, Cologne, Germany"],"email":"braun@cs.uni-koeln.de","is_corresponding":true,"name":"Daniel Braun"},{"affiliations":["Tufts University, Medford, United States"],"email":"remco@cs.tufts.edu","is_corresponding":false,"name":"Remco Chang"},{"affiliations":["University of Wisconsin - Madison, Madison, United States"],"email":"gleicher@cs.wisc.edu","is_corresponding":false,"name":"Michael Gleicher"},{"affiliations":["University of Cologne, Cologne, Germany"],"email":"landesberger@cs.uni-koeln.de","is_corresponding":false,"name":"Tatiana von Landesberger"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Daniel Braun"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1547","time_end":"","time_stamp":"","time_start":"","title":"Beware of Validation by Eye: Visual Validation of Linear Trends in Scatterplots","uid":"v-full-1547","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1568":{"abstract":"Dimensionality reduction techniques are widely used for visualizing high-dimensional data. However, support for interpreting patterns in dimensionality reduction results in the context of the original data space is often insufficient. Consequently, users may struggle to extract insights from the projections. In this paper we introduce DimBridge, a visual analytics tool that allows users to interact with visual patterns in a projection and retrieve corresponding data patterns. DimBridge supports several interactions, allowing users to perform various analyses, from contrasting multiple clusters to explaining complex latent structures. Leveraging first-order predicate logic, DimBridge identifies subspaces in the original dimensions relevant to a queried pattern and provides an interface for users to visualize and interact with them. We demonstrate how DimBridge can help users overcome the challenges associated with interpreting visual patterns in projections.","accessible_pdf":false,"authors":[{"affiliations":["Tufts University, Medford, United States"],"email":"brianmontambault@gmail.com","is_corresponding":true,"name":"Brian Montambault"},{"affiliations":["Tufts University, Medford, United States"],"email":"gabriel.appleby@tufts.edu","is_corresponding":false,"name":"Gabriel Appleby"},{"affiliations":["Tufts University, Boston, United States"],"email":"jen@cs.tufts.edu","is_corresponding":false,"name":"Jen Rogers"},{"affiliations":["Tufts University, Medford, United States"],"email":"camelia_daniela.brumar@tufts.edu","is_corresponding":false,"name":"Camelia D. 
Brumar"},{"affiliations":["Vanderbilt University, Nashville, United States"],"email":"mingwei.li@tufts.edu","is_corresponding":false,"name":"Mingwei Li"},{"affiliations":["Tufts University, Medford, United States"],"email":"remco@cs.tufts.edu","is_corresponding":false,"name":"Remco Chang"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Brian Montambault"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1568","time_end":"","time_stamp":"","time_start":"","title":"DimBridge: Interactive Explanation of Visual Patterns in Dimensionality Reductions with Predicate Logic","uid":"v-full-1568","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1571":{"abstract":"Effective security patrol management is critical for ensuring safety in diverse environments such as art galleries, airports, and factories. The behavior of patrols in these situations can be modeled by patrolling games. They simulate the behavior of the patrol and adversary in the building, which is modeled as a graph of interconnected nodes representing rooms. The designers of algorithms solving the game face the problem of analyzing complex graph layouts with temporal dependencies. Therefore, appropriate visual support is crucial for them to work effectively. In this paper, we present a novel tool that helps the designers of patrolling games explore the outcomes of the proposed algorithms and approaches, evaluate their success rate, and propose modifications that can improve their solutions. Our tool offers an intuitive and interactive interface, featuring a detailed exploration of patrol routes and probabilities of taking them, simulation of patrols, and other requested features. In close collaboration with experts in designing patrolling games, we conducted three case studies demonstrating the usage and usefulness of our tool. 
The prototype of the tool, along with exemplary datasets, is available at https://gitlab.fi.muni.cz/formela/strategy-vizualizer.","accessible_pdf":false,"authors":[{"affiliations":["Masaryk University, Brno, Czech Republic"],"email":"langm@mail.muni.cz","is_corresponding":true,"name":"Mat\u011bj Lang"},{"affiliations":["Masaryk University, Brno, Czech Republic"],"email":"469242@mail.muni.cz","is_corresponding":false,"name":"Adam \u0160t\u011bp\u00e1nek"},{"affiliations":["Faculty of Informatics, Masaryk University, Brno, Czech Republic"],"email":"514179@mail.muni.cz","is_corresponding":false,"name":"R\u00f3bert Zvara"},{"affiliations":["Faculty of Informatics, Masaryk University, Brno, Czech Republic"],"email":"rehak@fi.muni.cz","is_corresponding":false,"name":"Vojt\u011bch \u0158eh\u00e1k"},{"affiliations":["Masaryk University, Brno, Czech Republic"],"email":"kozlikova@fi.muni.cz","is_corresponding":false,"name":"Barbora Kozlikova"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Mat\u011bj Lang"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1571","time_end":"","time_stamp":"","time_start":"","title":"Who Let the Guards Out: Visual Support for Patrolling Games","uid":"v-full-1571","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1574":{"abstract":"The numerical extraction of vortex cores from time-dependent fluid flow attracted much attention over the past decades. A commonly agreed-upon vortex definition remained elusive since a proper vortex core needs to satisfy two hard constraints: it must be objective and Lagrangian. Recent methods on objectivization met the first but not the second constraint, since there was no formal guarantee that the resulting vortex coreline is indeed a pathline of the fluid flow. In this paper, we propose the first vortex core definition that is both objective and Lagrangian. Our approach restricts observer motions to follow along pathlines, which reduces the degrees of freedom: we only need to optimize for an observer rotation that makes the observed flow as steady as possible. This optimization succeeds along Lagrangian vortex corelines and will result in a non-zero time-partial derivative everywhere else. By performing this optimization at each point of a spatial grid, we obtain a residual scalar field, which we call vortex deviation error. The local minima on the grid serve as seed points for a gradient descent optimization that delivers sub-voxel accurate corelines. The visualization of both 2D and 3D vortex cores is based on the separation of the movement of the vortex core and the swirling flow behavior around it. While the vortex core is represented by a pathline, the swirling motion around it is visualized by streamlines in the correct frame. 
We demonstrate the utility of the approach on several 2D and 3D time-dependent vector fields.","accessible_pdf":false,"authors":[{"affiliations":["Friedrich-Alexander-University Erlangen-N\u00fcrnberg, Erlangen, Germany"],"email":"tobias.guenther@fau.de","is_corresponding":true,"name":"Tobias G\u00fcnther"},{"affiliations":["University of Magdeburg, Magdeburg, Germany"],"email":"theisel@ovgu.de","is_corresponding":false,"name":"Holger Theisel"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Tobias G\u00fcnther"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1574","time_end":"","time_stamp":"","time_start":"","title":"Objective Lagrangian Vortex Cores and their Visual Representations","uid":"v-full-1574","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1594":{"abstract":"The visualization community has a rich history of reflecting upon visualization design flaws. Although research in this area has remained lively, we believe it is essential to continuously revisit this classic and critical topic in visualization research by incorporating more empirical evidence from diverse sources, characterizing new design flaws, building more systematic theoretical frameworks, and understanding the underlying reasons for these flaws. To address the above gaps, this work investigated visualization design flaws through the lens of the public, constructed a framework to summarize and categorize the identified flaws, and explored why these flaws occur. Specifically, we analyzed 2227 flawed data visualizations collected from an online gallery and derived a design task-associated taxonomy containing 76 specific design flaws. These flaws were further classified into three high-level categories (i.e., misinformation, uninformativeness, unsociability) and ten subcategories (e.g., inaccuracy, unfairness, ambiguity). Next, we organized five focus groups to explore why these design flaws occur and identified seven causes of the flaws. 
Finally, we proposed a research agenda for combating visualization design flaws and summarized nine research opportunities.","accessible_pdf":false,"authors":[{"affiliations":["Fudan University, Shanghai, China","Fudan University, Shanghai, China"],"email":"xingyulan96@gmail.com","is_corresponding":true,"name":"Xingyu Lan"},{"affiliations":["University of Edinburgh, Edinburgh, United Kingdom","University of Edinburgh, Edinburgh, United Kingdom"],"email":"coraline.liu.dataviz@gmail.com","is_corresponding":false,"name":"Yu Liu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Xingyu Lan"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1594","time_end":"","time_stamp":"","time_start":"","title":"I Came Across a Junk: Understanding Design Flaws of Data Visualization from the Public's Perspective","uid":"v-full-1594","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1595":{"abstract":"Assigning discriminable and harmonic colors to samples according to their class labels and spatial distribution can generate attractive visualizations and facilitate data exploration. However, as the number of classes increases, it is challenging to generate a high-quality color assignment result that accommodates all classes simultaneously. A practical solution is to organize classes into a hierarchy and then dynamically assign colors during exploration. However, existing color assignment methods fall short in generating high-quality color assignment results and dynamically aligning them with hierarchical structures. To address this issue, we develop a dynamic color assignment method for hierarchical data, which is formulated as a multi-objective optimization problem. This method simultaneously considers color discriminability, color harmony, and spatial distribution at each hierarchical level. By using the colors of parent classes to guide the color assignment of their child classes, our method further promotes both consistency and clarity across hierarchical levels. 
We demonstrate the effectiveness of our method in generating dynamic color assignment results with quantitative experiments and a user study.","accessible_pdf":false,"authors":[{"affiliations":["Tsinghua University, Beijing, China"],"email":"jiashu0717c@gmail.com","is_corresponding":true,"name":"Jiashu Chen"},{"affiliations":["Tsinghua University, Beijing, China"],"email":"vicayang496@gmail.com","is_corresponding":false,"name":"Weikai Yang"},{"affiliations":["Tsinghua University, Beijing, China"],"email":"jiazl22@mails.tsinghua.edu.cn","is_corresponding":false,"name":"Zelin Jia"},{"affiliations":["Tsinghua University, Beijing, China"],"email":"tarolancy@gmail.com","is_corresponding":false,"name":"Lanxi Xiao"},{"affiliations":["Tsinghua University, Beijing, China"],"email":"shixia@tsinghua.edu.cn","is_corresponding":false,"name":"Shixia Liu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Jiashu Chen"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1595","time_end":"","time_stamp":"","time_start":"","title":"Dynamic Color Assignment for Hierarchical Data","uid":"v-full-1595","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1597":{"abstract":"In understanding and redesigning the function of proteins in modern biochemistry, protein engineers are increasingly focusing on exploring regions in proteins called loops. Analyzing various characteristics of these regions helps the experts design the transfer of the desired function from one protein to another. This process is denoted as loop grafting. We designed a set of interactive visualizations that provide experts with visual support through all the loop grafting pipeline steps. The workflow is divided into several phases, reflecting the steps of the pipeline. Each phase is supported by a specific set of abstracted 2D visual representations of proteins and their loops that are interactively linked with the 3D View of proteins. By sequentially passing through the individual phases, the user shapes the list of loops that are potential candidates for loop grafting. Finally, the actual in-silico insertion of the loop candidates from one protein to the other is performed, and the results are visually presented to the user. In this way, the fully computational rational design of proteins and their loops results in newly designed protein structures that can be further assembled and tested through in-vitro experiments. We showcase the contribution of our visual support design in a real-world scenario: changing the enantiomer selectivity of an engineered enzyme. Moreover, we provide the readers with the experts' feedback.","accessible_pdf":false,"authors":[{"affiliations":["Masaryk University, Brno, Czech Republic"],"email":"kiraa@mail.muni.cz","is_corresponding":false,"name":"Filip Op\u00e1len\u00fd"},{"affiliations":["Masaryk University, Brno, Czech Republic"],"email":"paloulbrich@gmail.com","is_corresponding":false,"name":"Pavol Ulbrich"},{"affiliations":["Masaryk University, Brno, Czech Republic","St. 
Anne\u2019s University Hospital, Brno, Czech Republic"],"email":"joan.planas@mail.muni.cz","is_corresponding":false,"name":"Joan Planas-Iglesias"},{"affiliations":["Masaryk University, Brno, Czech Republic","University of Bergen, Bergen, Norway"],"email":"xbyska@fi.muni.cz","is_corresponding":false,"name":"Jan By\u0161ka"},{"affiliations":["Masaryk University, Brno, Czech Republic","St. Anne\u2019s University Hospital, Brno, Czech Republic"],"email":"stourac.jan@gmail.com","is_corresponding":false,"name":"Jan \u0160toura\u010d"},{"affiliations":["Faculty of Science, Masaryk University, Brno, Czech Republic","St. Anne\u2019s University Hospital Brno, Brno, Czech Republic"],"email":"222755@mail.muni.cz","is_corresponding":false,"name":"David Bedn\u00e1\u0159"},{"affiliations":["Masaryk University, Brno, Czech Republic"],"email":"katarina.furmanova@gmail.com","is_corresponding":true,"name":"Katar\u00edna Furmanov\u00e1"},{"affiliations":["Masaryk University, Brno, Czech Republic"],"email":"kozlikova@fi.muni.cz","is_corresponding":false,"name":"Barbora Kozlikova"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Katar\u00edna Furmanov\u00e1"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1597","time_end":"","time_stamp":"","time_start":"","title":"Visual Support for the Loop Grafting Workflow on Proteins","uid":"v-full-1597","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1599":{"abstract":"Existing deep learning-based surrogate models facilitate efficient data generation, but fall short in uncertainty quantification, efficient parameter space exploration, and reverse prediction. In our work, we introduce SurroFlow, a novel normalizing flow-based surrogate model, to learn the invertible transformation between simulation parameters and simulation outputs. The model not only allows accurate predictions of simulation outcomes for a given simulation parameter but also supports uncertainty quantification in the data generation process. Additionally, it enables efficient simulation parameter recommendation and exploration. We integrate SurroFlow and a genetic algorithm as the backend of a visual interface to support effective user-guided ensemble simulation exploration and visualization. 
Our framework significantly reduces the computational costs while enhancing the reliability and exploration capabilities of scientific surrogate models.","accessible_pdf":false,"authors":[{"affiliations":["The Ohio State University, Columbus, United States","The Ohio State University, Columbus, United States"],"email":"shen.1250@osu.edu","is_corresponding":true,"name":"JINGYI SHEN"},{"affiliations":["The Ohio State University, Columbus, United States","The Ohio State University, Columbus, United States"],"email":"duan.418@osu.edu","is_corresponding":false,"name":"Yuhan Duan"},{"affiliations":["The Ohio State University , Columbus , United States","The Ohio State University , Columbus , United States"],"email":"hwshen@cse.ohio-state.edu","is_corresponding":false,"name":"Han-Wei Shen"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["JINGYI SHEN"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1599","time_end":"","time_stamp":"","time_start":"","title":"SurroFlow: A Flow-Based Surrogate Model for Parameter Space Exploration and Uncertainty Quantification","uid":"v-full-1599","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1603":{"abstract":"Multi-modal embeddings form the foundation for vision-language models, such as CLIP embeddings, the most widely used text-image embeddings. However, these embeddings are hard to interpret and vulnerable to subtle misalignment of cross-modal features, resulting in decreased model performance and diminished generalization. To address this problem, we design ModalChorus, an interactive system for visual probing and alignment of multi-modal embeddings. ModalChorus primarily offers a two-stage process: 1) embedding probing with Modal Fusion Map (MFM), a novel parametric dimensionality reduction method that integrates both metric and nonmetric objectives to enhance modality fusion; and 2) embedding alignment that allows users to interactively articulate intentions for both point-set and set-set alignments. Quantitative and qualitative comparisons for CLIP embeddings with existing dimensionality reduction (e.g., t-SNE and MDS) and data fusion (e.g., data context map) methods demonstrate the advantages of MFM in showcasing cross-modal features over common vision-language datasets. 
Case studies reveal that ModalChorus can facilitate intuitive discovery of misalignment and efficient re-alignment in scenarios ranging from zero-shot classification to cross-modal retrieval and generation.","accessible_pdf":false,"authors":[{"affiliations":["The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China"],"email":"yyebd@connect.ust.hk","is_corresponding":true,"name":"Yilin Ye"},{"affiliations":["The Hong Kong University of Science and Technology(Guangzhou), Guangzhou, China"],"email":"sxiao713@connect.hkust-gz.edu.cn","is_corresponding":false,"name":"Shishi Xiao"},{"affiliations":["the Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China"],"email":"xingchen.zeng@outlook.com","is_corresponding":false,"name":"Xingchen Zeng"},{"affiliations":["The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China","The Hong Kong University of Science and Technology, Hong Kong SAR, China"],"email":"weizeng@hkust-gz.edu.cn","is_corresponding":false,"name":"Wei Zeng"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yilin Ye"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1603","time_end":"","time_stamp":"","time_start":"","title":"ModalChorus: Visual Probing and Alignment of Multi-modal Embeddings via Modal Fusion Map","uid":"v-full-1603","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1606":{"abstract":"With the increase of graph size, it becomes difficult or even impossible to visualize graph structures clearly within the limited screen space. Consequently, it is crucial to design effective visual representations for large graphs. In this paper, we propose AdaMotif, a novel approach that can capture the essential structure patterns of large graphs and effectively reveal the overall structures via adaptive motif designs. Specifically, our approach involves partitioning a given large graph into multiple subgraphs, then clustering similar subgraphs and extracting similar structural information within each cluster. Subsequently, adaptive motifs representing each cluster are generated and utilized to replace the corresponding subgraphs, leading to a simplified visualization. Our approach aims to preserve as much information from the subgraphs as possible, effectively simplifying graphs while minimizing information loss. Notably, our approach successfully visualizes crucial community information within a large graph. We conduct case studies and a user study using both synthetic and real-world graphs to validate the effectiveness of our proposed approach. 
The results demonstrate the capability of our approach in simplifying graphs while retaining important structural and community information.","accessible_pdf":false,"authors":[{"affiliations":["Shenzhen University, Shenzhen, China"],"email":"hzhou@szu.edu.cn","is_corresponding":true,"name":"Hong Zhou"},{"affiliations":["Shenzhen University, Shenzhen, China"],"email":"laipeifeng1111@gmail.com","is_corresponding":false,"name":"Peifeng Lai"},{"affiliations":["Shenzhen University, Shenzhen, China"],"email":"zhida.sun@connect.ust.hk","is_corresponding":false,"name":"Zhida Sun"},{"affiliations":["Shenzhen University, Shenzhen, China"],"email":"2310274034@email.szu.edu.cn","is_corresponding":false,"name":"Xiangyuan Chen"},{"affiliations":["Shenzhen University, Shen Zhen, China"],"email":"275621136@qq.com","is_corresponding":false,"name":"Yang Chen"},{"affiliations":["Shenzhen University, Shenzhen, China"],"email":"hswu@szu.edu.cn","is_corresponding":false,"name":"Huisi Wu"},{"affiliations":["Nanyang Technological University, Singapore, Singapore"],"email":"yong-wang@ntu.edu.sg","is_corresponding":false,"name":"Yong WANG"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Hong Zhou"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1606","time_end":"","time_stamp":"","time_start":"","title":"AdaMotif: Graph Simplification via Adaptive Motif Design","uid":"v-full-1606","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1612":{"abstract":"Partitionings (or segmentations) divide a given domain into disjoint connected regions whose union again forms the entire domain. Multi-dimensional partitionings occur, for example, when analyzing parameter spaces of simulation models, where each segment of the partitioning represents a region of similar model behavior. Having computed a partitioning, one is commonly interested in understanding how large the segments are and which segments lie next to each other. While visual representations of 2D domain partitionings that reveal sizes and neighborhoods are straightforward, this is no longer the case when considering multi-dimensional domains of three or more dimensions. We propose an algorithm for computing 2D embeddings of multi-dimensional partitionings. The embedding shall have the following properties: It shall maintain the topology of the partitioning and optimize the area sizes and joint boundary lengths of the embedded segments to match the respective sizes and lengths in the multi-dimensional domain. We demonstrate the effectiveness of our approach by applying it to different use cases, including the visual exploration of 3D spatial domain segmentations and multi-dimensional parameter space partitionings of simulation ensembles. 
We numerically evaluate our algorithm with respect to how well sizes and lengths are preserved depending on the dimensionality of the domain and the number of segments.","accessible_pdf":false,"authors":[{"affiliations":["University of M\u00fcnster, M\u00fcnster, Germany"],"email":"m_ever14@uni-muenster.de","is_corresponding":true,"name":"Marina Evers"},{"affiliations":["University of M\u00fcnster, M\u00fcnster, Germany"],"email":"linsen@uni-muenster.de","is_corresponding":false,"name":"Lars Linsen"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Marina Evers"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1612","time_end":"","time_stamp":"","time_start":"","title":"2D Embeddings of Multi-dimensional Partitionings","uid":"v-full-1612","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1613":{"abstract":"We present a path-based design model and system for designing and creating visualisations. Our model represents a systematic approach to constructing visual representations of data or concepts following a predefined sequence of steps. The initial step involves outlining the overall appearance of the visualisation by creating a skeleton structure, referred to as a flowpath. Subsequently, we specify objects, visual marks, properties, and appearance, storing them in a gene. Lastly, we map data onto the flowpath, ensuring suitable morphisms. Alternative designs are created by exchanging values in the gene. For example, designs that share similar traits are created by making small incremental changes to the gene. Our design method develops a wide variety of creative ideas, space-filling visualisations, and traditional designs (bar chart, pie chart, etc.). Our implementation demonstrates the model, and we apply the output visualisations to a smart-watch and to visualisation dashboards. In this article we (1) introduce, define and explain the path model and discuss possibilities for its use, (2) present our implementation, results, and evaluation, and (3) demonstrate and evaluate an application of its use on a mobile watch.","accessible_pdf":false,"authors":[{"affiliations":["ExaDev, Gaerwen, United Kingdom","Bangor University, Bangor, United Kingdom"],"email":"james.ogge@gmail.com","is_corresponding":false,"name":"James R Jackson"},{"affiliations":["Bangor University, Bangor, United Kingdom"],"email":"p.ritsos@bangor.ac.uk","is_corresponding":false,"name":"Panagiotis D. Ritsos"},{"affiliations":["Bangor University, Bangor, United Kingdom"],"email":"p.butcher@bangor.ac.uk","is_corresponding":false,"name":"Peter W. S. 
Butcher"},{"affiliations":["Bangor University, Bangor, United Kingdom"],"email":"j.c.roberts@bangor.ac.uk","is_corresponding":true,"name":"Jonathan C Roberts"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Jonathan C Roberts"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1613","time_end":"","time_stamp":"","time_start":"","title":"Path-based Design Model for Constructing and Exploring Alternative Visualisations","uid":"v-full-1613","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1615":{"abstract":"We present Cell2Cell, a novel visual analytics approach for quantifying and visualizing networks of cell-cell interactions in three-dimensional (3D) multi-channel cancerous tissue data. By analyzing cellular interactions, biomedical domain experts can gain a more accurate understanding of the intricate relationships between cancer and immune cells. Recent methods have focused on inferring interactions based on the proximity of cells in low-resolution 2D multi-channel imaging data. By contrast, we analyze cell interactions by quantifying the intensities of protein expressions extracted from high-resolution 3D multi-channel volume data. Such analyses have a strong exploratory nature and require a tight integration of domain experts in the analysis loop to leverage their deep knowledge. We propose two complementary semi-automated approaches to cope with the increasing size and complexity of the data in an interactive fashion: On the one hand, we interpret cell-to-cell interactions as edges in a cell graph and analyze the image signal (protein expressions) along those edges, using spatial as well as abstract data visualizations. Complementarily, we propose a cell-centered approach, enabling scientists to visually analyze polarized distributions of proteins in three dimensions, which also captures neighboring cells with biochemical and cell biological consequences. We evaluate our application in two case studies, where computational biologists and medical experts use Cell2Cell to investigate tumor micro-environments to identify and quantify T-cell activation in human tissue data. 
We confirmed that our tool can fully solve both use cases and enables a streamlined and detailed analysis of cell-cell interactions.","accessible_pdf":false,"authors":[{"affiliations":["Harvard Medical School, Boston, United States"],"email":"eric.moerth@gmx.at","is_corresponding":true,"name":"Eric M\u00f6rth"},{"affiliations":["University of Vienna, Vienna, Austria"],"email":"kevin.sidak@univie.ac.at","is_corresponding":false,"name":"Kevin Sidak"},{"affiliations":["Harvard Medical School, Boston, United States"],"email":"zoltan_maliga@hms.harvard.edu","is_corresponding":false,"name":"Zoltan Maliga"},{"affiliations":["University of Vienna, Vienna, Austria"],"email":"torsten.moeller@univie.ac.at","is_corresponding":false,"name":"Torsten M\u00f6ller"},{"affiliations":["Harvard Medical School, Boston, United States"],"email":"nils@hms.harvard.edu","is_corresponding":false,"name":"Nils Gehlenborg"},{"affiliations":["Harvard University, Cambridge, United States"],"email":"peter_sorger@hms.harvard.edu","is_corresponding":false,"name":"Peter Sorger"},{"affiliations":["Harvard University, Cambridge, United States"],"email":"pfister@seas.harvard.edu","is_corresponding":false,"name":"Hanspeter Pfister"},{"affiliations":["Harvard University, Cambridge, United States"],"email":"jbeyer@g.harvard.edu","is_corresponding":false,"name":"Johanna Beyer"},{"affiliations":["New York University, New York, United States","Harvard University, Boston, United States"],"email":"rk4815@nyu.edu","is_corresponding":false,"name":"Robert Kr\u00fcger"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Eric M\u00f6rth"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1615","time_end":"","time_stamp":"","time_start":"","title":"Cell2Cell: Explorative Cell Interaction Analysis in Multi-Volumetric Tissue Data","uid":"v-full-1615","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1626":{"abstract":"We propose and study a novel cross-reality environment that seamlessly integrates a monoscopic 2D surface (an interactive screen with touch and pen input) with a stereoscopic 3D space (an augmented reality HMD) to jointly host spatial data visualizations. This innovative approach combines the best of two conventional methods of displaying and manipulating spatial 3D data, enabling users to fluidly explore diverse visual forms using tailored interaction techniques. Providing such effective 3D data exploration techniques is pivotal for conveying its intricate spatial structures---often at multiple spatial or semantic scales---across various application domains, which require diverse visual representations for effective visualization. To understand user reactions to our new environment, we began with an elicitation user study, in which we captured their responses and interactions. We observed that users adapted their interaction approaches based on perceived visual representations, with natural transitions in spatial awareness and actions while navigating across the physical surface. Our findings then informed the development of a design space for spatial data exploration in cross-reality. 
We thus developed cross-reality environments tailored to three distinct domains: for 3D molecular structure data, for 3D point cloud data, and for 3D anatomical data. In particular, we designed interaction techniques that account for the inherent features of interactions in both spaces, facilitating various forms of interaction including mid-air gestures, touch interactions, pen interactions, and combinations thereof to enhance the users' sense of presence and engagement. We assessed the usability of our environment with biologists, focusing on its use for domain research. In addition, we evaluated our interaction transition designs with virtual and mixed-reality experts to gather further insights. As a result, we provide our design suggestions for the cross-reality environment, emphasizing the interaction with diverse visual representations and seamless interaction transitions between 2D and 3D spaces.","accessible_pdf":false,"authors":[{"affiliations":["Xi'an Jiaotong-Liverpool University, Suzhou, China"],"email":"lixiang.zhao17@student.xjtlu.edu.cn","is_corresponding":false,"name":"Lixiang Zhao"},{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Orsay, France","Inria, Saclay, France"],"email":"tobias.isenberg@gmail.com","is_corresponding":false,"name":"Tobias Isenberg"},{"affiliations":["Xi'an Jiaotong-Liverpool University, Suzhou, China"],"email":"fuqi.xie20@student.xjtlu.edu.cn","is_corresponding":false,"name":"Fuqi Xie"},{"affiliations":["Xi'an Jiaotong-Liverpool University, Suzhou, China"],"email":"hainingliang@hkust-gz.edu.cn","is_corresponding":false,"name":"Hai-Ning Liang"},{"affiliations":["Xi'an Jiaotong-Liverpool University, Suzhou, China"],"email":"lingyun.yu@xjtlu.edu.cn","is_corresponding":true,"name":"Lingyun Yu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Lingyun Yu"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1626","time_end":"","time_stamp":"","time_start":"","title":"SpatialTouch: Exploring Spatial Data Visualizations in Cross-reality","uid":"v-full-1626","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1632":{"abstract":"High-dimensional data, characterized by many features, can be difficult to visualize effectively. Dimensionality reduction techniques, such as PCA, UMAP, and t-SNE, address this challenge by projecting the data into a lower-dimensional space while preserving important relationships. TopoMap is another technique that excels at preserving the underlying structure of the data, leading to interpretable visualizations. In particular, TopoMap maps the high-dimensional data into a visual space, guaranteeing that the 0-dimensional persistence diagram of the Rips filtration of the visual space matches the one from the high-dimensional data. However, the original TopoMap algorithm can be slow and its layout can be too sparse for large and complex datasets. In this paper, we propose three improvements to TopoMap: 1) a more space-efficient layout, 2) a significantly faster implementation, and 3) a novel treemap-based representation to aid the exploration of the projections. 
These advancements make TopoMap, now referred to as TopoMap++, a more powerful tool for visualizing high-dimensional data.","accessible_pdf":false,"authors":[{"affiliations":["New York University, New York City, United States"],"email":"vitoriaguardieiro@gmail.com","is_corresponding":true,"name":"Vitoria Guardieiro"},{"affiliations":["New York University, New York City, United States"],"email":"felipedeoliveira1407@gmail.com","is_corresponding":false,"name":"Felipe Inagaki de Oliveira"},{"affiliations":["Microsoft Research India, Bangalore, India"],"email":"harish.doraiswamy@microsoft.com","is_corresponding":false,"name":"Harish Doraiswamy"},{"affiliations":["University of Sao Paulo, Sao Carlos, Brazil"],"email":"gnonato@icmc.usp.br","is_corresponding":false,"name":"Luis Gustavo Nonato"},{"affiliations":["New York University, New York City, United States"],"email":"csilva@nyu.edu","is_corresponding":false,"name":"Claudio Silva"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Vitoria Guardieiro"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1632","time_end":"","time_stamp":"","time_start":"","title":"TopoMap++: A faster and more space efficient technique to compute projections with topological guarantees","uid":"v-full-1632","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1638":{"abstract":"Probability density function (PDF) curves are among the few charts on a Cartesian coordinate system that are commonly presented without y-axes. This design decision may be due to the lack of relevance of vertical scaling in normal PDFs. In fact, as long as two normal PDFs have the same means and standard deviations (SDs), they can be scaled to occupy different amounts of vertical space while still remaining statistically identical. Because unscaled PDF height increases as SD decreases, visualization designers may find themselves tempted to vertically shrink low-SD PDFs to avoid occlusion or save white space in their figures. While irregular vertical scaling has been explored in bar and line charts, the visualization community has yet to investigate how this purely visual manipulation may affect reader comparisons of PDFs. In this paper, we present two preregistered quantitative experiments (n=600, n=401) that systematically demonstrate that vertical scaling can lead to misinterpretations of PDFs. We also test visual interventions to mitigate misinterpretation. In some contexts, we find that including a y-axis reduces this effect. Overall, we find that keeping vertical scaling consistent, and therefore maintaining equal pixel areas under PDF curves, results in the highest likelihood of accurate comparisons. Our findings provide the first insights into the impact of vertical scaling on PDFs, and reveal the complicated nature of proportional area comparisons.","accessible_pdf":false,"authors":[{"affiliations":["Northeastern University, Boston, United States"],"email":"racquel.fygenson@gmail.com","is_corresponding":true,"name":"Racquel Fygenson"},{"affiliations":["Northeastern University, Boston, United States"],"email":"l.padilla@northeastern.edu","is_corresponding":false,"name":"Lace M. 
Padilla"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Racquel Fygenson"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1638","time_end":"","time_stamp":"","time_start":"","title":"The Impact of Vertical Scaling on Normal Probability Density Function Plots","uid":"v-full-1638","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1642":{"abstract":"Despite the development of numerous visual analytics tools for event sequence data across various domains, including, but not limited to, healthcare, digital marketing, and user behavior analysis, comparing these domain-specific investigations and transferring the results to new datasets and problem areas remain challenging. Task abstractions can help us go beyond domain-specific details, but existing visualization task abstractions are insufficient for event sequence visual analytics because they primarily focus on tabular datasets and often overlook automated analytical techniques. To address this gap, we propose a domain-agnostic multi-level task framework for event sequence analysis, derived from an analysis of 58 papers that present event sequence visualization systems. Our framework consists of four levels: objective, intent, strategy, and technique. Overall objectives identify the main goals of analysis. Intents comprise five high-level approaches adopted at each analysis step: augment data, simplify data, configure data, configure visualization, and create provenance. Each intent is accomplished through a number of strategies; for instance, data simplification can be achieved through aggregation, summarization, or segmentation. Finally, each strategy can be implemented by a set of techniques depending on the input and output components. We further show that techniques can be expressed through a quartet of action-input-output-criteria. 
We demonstrate the framework\u2019s power through mapping case studies and discuss its similarities and differences with previous event sequence task taxonomies.","accessible_pdf":false,"authors":[{"affiliations":["University of Maryland, College Park, College Park, United States"],"email":"kzintas@umd.edu","is_corresponding":true,"name":"Kazi Tasnim Zinat"},{"affiliations":["University of Maryland, College Park, United States"],"email":"ssakhamu@terpmail.umd.edu","is_corresponding":false,"name":"Saimadhav Naga Sakhamuri"},{"affiliations":["University of Maryland, College Park, United States"],"email":"achen151@terpmail.umd.edu","is_corresponding":false,"name":"Aaron Sun Chen"},{"affiliations":["University of Maryland, College Park, United States"],"email":"leozcliu@umd.edu","is_corresponding":false,"name":"Zhicheng Liu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Kazi Tasnim Zinat"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1642","time_end":"","time_stamp":"","time_start":"","title":"A Multi-Level Task Framework for Event Sequence Analysis","uid":"v-full-1642","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1681":{"abstract":"In recent years, the global adoption of electric vehicles (EVs) has surged, prompting a corresponding rise in the installation of charging stations. This proliferation has underscored the importance of expediting the deployment of charging infrastructure. Both academia and industry have thus devoted efforts to addressing the charging station location problem (CSLP) to streamline this process. However, prevailing algorithms addressing CSLP are hampered by restrictive assumptions and computational overhead, leading to a dearth of comprehensive evaluations in the spatiotemporal dimensions. Consequently, their practical viability is restricted. Moreover, the placement of charging stations exerts a significant impact on both the road network and the power grid, which necessitates evaluating the potential post-deployment impacts on these interconnected networks holistically. In this study, we propose CSLens, a visual analytics system designed to inform charging station deployment decisions through the lens of coupled transportation and power networks. CSLens offers multiple visualizations and interactive features, empowering users to delve into the existing charging station layout, explore alternative deployment solutions, and assess the ensuing impact. To validate the efficacy of CSLens, we conducted two case studies and engaged in interviews with domain experts. Through these efforts, we substantiated the usability and practical utility of CSLens in enhancing the decision-making process surrounding charging station deployment. 
Our findings underscore CSLens\u2019s potential to serve as a valuable asset in navigating the complexities of charging infrastructure planning.","accessible_pdf":false,"authors":[{"affiliations":["Sun Yat-sen University, Shenzhen, China"],"email":"zhangyt85@mail2.sysu.edu.cn","is_corresponding":false,"name":"Yutian Zhang"},{"affiliations":["Sun Yat-sen University, Shenzhen, China"],"email":"xulw8@mail2.sysu.edu.cn","is_corresponding":false,"name":"Liwen Xu"},{"affiliations":["Sun Yat-sen University, Shenzhen, China"],"email":"taoshc@mail2.sysu.edu.cn","is_corresponding":false,"name":"Shaocong Tao"},{"affiliations":["Sun Yat-sen University, Shenzhen, China"],"email":"guanqx3@mail.sysu.edu.cn","is_corresponding":false,"name":"Quanxue Guan"},{"affiliations":["ShanghaiTech University, Shanghai, China"],"email":"liquan@shanghaitech.edu.cn","is_corresponding":false,"name":"Quan Li"},{"affiliations":["Sun Yat-sen University, Shenzhen, China"],"email":"zenghp5@mail.sysu.edu.cn","is_corresponding":true,"name":"Haipeng Zeng"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Haipeng Zeng"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1681","time_end":"","time_stamp":"","time_start":"","title":"CSLens: Towards Better Deploying Charging Stations via Visual Analytics \u2014\u2014 A Coupled Networks Perspective","uid":"v-full-1681","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1693":{"abstract":"We introduce a visual analysis method for multiple causality graphs with different outcome variables, namely, multi-outcome causality graphs. Multi-outcome causality graphs are important in healthcare for understanding multimorbidity and comorbidity. To support the visual analysis, we collaborated with medical experts to devise two comparative visualization techniques at different stages of the analysis process. First, a progressive visualization method is proposed for comparing multiple state-of-the-art causal discovery algorithms. The method can handle mixed-type datasets comprising both continuous and categorical variables and assist in the creation of a fine-tuned causality graph of a single outcome. Second, a comparative graph layout technique and specialized visual encodings are devised for the quick comparison of multiple causality graphs. In our visual analysis approach, analysts start by building individual causality graphs for each outcome variable, and then, multi-outcome causality graphs are generated and visualized with our comparative technique for analyzing differences and commonalities of these causality graphs. 
Evaluation includes quantitative measurements on benchmark datasets, a case study with a medical expert, and expert user studies with real-world health research data.","accessible_pdf":false,"authors":[{"affiliations":["Institute of Medical Technology, Peking University Health Science Center, Beijing, China","National Institute of Health Data Science, Peking University, Beijing, China"],"email":"mengjiefan@bjmu.edu.cn","is_corresponding":true,"name":"Mengjie Fan"},{"affiliations":["Beihang University, Beijing, China","Peking University, Beijing, China"],"email":"yu.jinlu@qq.com","is_corresponding":false,"name":"Jinlu Yu"},{"affiliations":["University of Stuttgart, Stuttgart, Germany"],"email":"weiskopf@visus.uni-stuttgart.de","is_corresponding":false,"name":"Daniel Weiskopf"},{"affiliations":["Tongji College of Design and Innovation, Shanghai, China"],"email":"nan.cao@gmail.com","is_corresponding":false,"name":"Nan Cao"},{"affiliations":["Beijing University of Chinese Medicine, Beijing, China"],"email":"wanghuaiyuelva@126.com","is_corresponding":false,"name":"Huaiyu Wang"},{"affiliations":["Peking University, Beijing, China"],"email":"zhoulng@pku.edu.cn","is_corresponding":false,"name":"Liang Zhou"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Mengjie Fan"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1693","time_end":"","time_stamp":"","time_start":"","title":"Visual Analysis of Multi-outcome Causal Graphs","uid":"v-full-1693","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1699":{"abstract":"Room-scale immersive data visualisations provide viewers a wide-scale overview of a large dataset, but to interact precisely with individual data points they typically have to navigate to change their point of view. In traditional screen-based visualisations, focus-and-context techniques allow visualisation users to keep a full dataset in view while making detailed selections. Such techniques have been studied extensively on desktop to allow precise selection within large data sets, but they have not been explored in immersive 3D modalities. In this paper we develop a novel immersive focus-and-context technique based on a ``magic portal'' metaphor adapted specifically for data visualisation scenarios. An extendable-hand interaction technique is used to place a portal close to the region of interest. The other end of the portal then opens comfortably within the user's physical reach such that they can reach through to precisely select individual data points. Through a controlled study with 24 participants, we find strong evidence that portals reduce overshoots in selection and overall hand trajectory length, reducing arm fatigue compared to ranged interaction without the portal. The portals also enable us to use a robot arm to provide haptic feedback for data within the limited volume of the portal region. 
We demonstrate applications for portal-based selection through two use-case scenarios.","accessible_pdf":false,"authors":[{"affiliations":["Monash University, Melbourne, Australia"],"email":"dai.shaozhang@gmail.com","is_corresponding":true,"name":"Shaozhang Dai"},{"affiliations":["Monash University, Melbourne, Australia"],"email":"yi.li5@monash.edu","is_corresponding":false,"name":"Yi Li"},{"affiliations":["The University of British Columbia (Okanagan Campus), Kelowna, Canada"],"email":"barrett.ens@ubc.ca","is_corresponding":false,"name":"Barrett Ens"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"lonni.besancon@gmail.com","is_corresponding":false,"name":"Lonni Besan\u00e7on"},{"affiliations":["Monash University, Melbourne, Australia"],"email":"tgdwyer@gmail.com","is_corresponding":false,"name":"Tim Dwyer"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Shaozhang Dai"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1699","time_end":"","time_stamp":"","time_start":"","title":"Precise Embodied Data Selection in Room-scale Visualisations While Retaining View Context","uid":"v-full-1699","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1705":{"abstract":"Contour trees describe the topology of level sets in scalar fields and are widely used in topological data analysis and visualization. A main challenge for utilizing contour trees for large-scale scientific data is their computation at scale using high-performance computing. To address this challenge, recent work has introduced distributed hierarchical contour trees for distributed computation and storage of contour trees. However, effective use of these distributed structures in analysis and visualization requires subsequent computation of geometric properties and branch decomposition to support contour extraction and exploration. In this work, we introduce distributed algorithms for augmentation, hypersweeps, and branch decomposition that enable parallel computation of geometric properties, and support the use of distributed contour trees as a query structure for scientific exploration. 
We evaluate the parallel performance of these algorithms and apply them to identify and extract important contours for scientific visualization.","accessible_pdf":false,"authors":[{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"mingzhefluorite@gmail.com","is_corresponding":true,"name":"Mingzhe Li"},{"affiliations":["University of Leeds, Leeds, United Kingdom"],"email":"h.carr@leeds.ac.uk","is_corresponding":false,"name":"Hamish Carr"},{"affiliations":["Lawrence Berkeley National Laboratory, Berkeley, United States"],"email":"oruebel@lbl.gov","is_corresponding":false,"name":"Oliver R\u00fcbel"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"wang.bei@gmail.com","is_corresponding":false,"name":"Bei Wang"},{"affiliations":["Lawrence Berkeley National Laboratory, Berkeley, United States"],"email":"ghweber@lbl.gov","is_corresponding":false,"name":"Gunther H Weber"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Mingzhe Li"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1705","time_end":"","time_stamp":"","time_start":"","title":"Distributed Augmentation, Hypersweeps, and Branch Decomposition of Contour Trees for Scientific Exploration","uid":"v-full-1705","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1708":{"abstract":"The widespread use of Deep Neural Networks (DNNs) has recently resulted in their application to challenging scientific visualization tasks. While advanced DNNs demonstrate impressive generalization abilities, understanding factors like prediction quality, confidence, robustness, and uncertainty is crucial. These insights aid application scientists in making informed decisions. However, DNNs lack inherent mechanisms to measure prediction uncertainty, prompting the creation of distinct frameworks for constructing robust uncertainty-aware models tailored to various visualization tasks. In this work, we develop uncertainty-aware implicit neural representations to model steady-state vector fields effectively. We comprehensively evaluate the efficacy of two principled deep uncertainty estimation techniques: (1) Deep Ensemble and (2) Monte Carlo Dropout, aimed at enabling uncertainty-informed visual analysis of features within steady vector field data. Our detailed exploration using several vector data sets indicates that uncertainty-aware models generate informative visualization results of vector field features. 
Furthermore, incorporating prediction uncertainty improves the resilience and interpretability of our DNN model, rendering it applicable for the analysis of complex vector field data sets.","accessible_pdf":false,"authors":[{"affiliations":["Indian Institute of Technology Kanpur , Kanpur, India"],"email":"atulkrfcb@gmail.com","is_corresponding":false,"name":"Atul Kumar"},{"affiliations":["Indian Institute of Technology Kanpur , Kanpur , India"],"email":"gsiddharth2209@gmail.com","is_corresponding":false,"name":"Siddharth Garg"},{"affiliations":["Indian Institute of Technology Kanpur (IIT Kanpur), Kanpur, India"],"email":"soumya.cvpr@gmail.com","is_corresponding":true,"name":"Soumya Dutta"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Soumya Dutta"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1708","time_end":"","time_stamp":"","time_start":"","title":"Uncertainty-Aware Deep Neural Representations for Visual Analysis of Vector Field Data","uid":"v-full-1708","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1726":{"abstract":"User experience in data visualization is typically assessed through post-viewing self-reports, but these overlook the dynamic cognitive processes during interaction. This study explores the use of mind wandering as a dynamic measure during visualization exploration. Participants reported mind wandering while viewing visualizations from a pre-labeled visualization database and then provided quantitative ratings of trust, engagement, and design quality, along with qualitative descriptions and short-term/long-term recall assessments. Results show that mind wandering negatively affects short-term visualization recall and various post-viewing measures, particularly for visualizations with little text annotation. Further, the type of mind wandering impacts engagement and emotional response. Mind wandering also acts as a serial mediator between visualization design elements and post-viewing measures. Overall, this research underscores the importance of incorporating mind wandering as a dynamic measure in visualization design and evaluation, offering novel avenues for enhancing user engagement and comprehension.","accessible_pdf":false,"authors":[{"affiliations":["Arizona State University, Tempe, United States"],"email":"aarunku5@asu.edu","is_corresponding":true,"name":"Anjana Arunkumar"},{"affiliations":["Northeastern University, Boston, United States"],"email":"l.padilla@northeastern.edu","is_corresponding":false,"name":"Lace M. 
Padilla"},{"affiliations":["Arizona State University, Tempe, United States"],"email":"cbryan16@asu.edu","is_corresponding":false,"name":"Chris Bryan"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Anjana Arunkumar"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1726","time_end":"","time_stamp":"","time_start":"","title":"Mind Drifts, Data Shifts: Utilizing Mind Wandering to Track the Evolution of User Experience with Data Visualizations","uid":"v-full-1726","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1730":{"abstract":"Understanding the input and output of data wrangling scripts is crucial for various tasks like debugging codes and onboarding new data. However, existing research on script understanding primarily focuses on revealing the process of data transformations, lacking the ability to analyze the potential scope, i.e., the space of script inputs and outputs. Meanwhile, constructing input/output space during script analysis is challenging, as the wrangling scripts could be semantically complex and diverse, and the association between different data objects is intricate. To facilitate data workers in understanding the input and output spaces of wrangling scripts, we summarize ten types of constraints to express table spaces, and build a mapping between data transformations and these constraints to guide the construction of the input/output for individual transformations. Then, we propose a constraint generation model for integrating table constraints across multiple transformations. Based on the model, we develop Ferry, an interactive system that extracts and visualizes the data constraints describing the input and output spaces of data wrangling scripts, thereby enabling users to grasp the high-level semantics of complex scripts and locate the origins of faulty data transformations. Besides, Ferry provides example input and output data to assist users in interpreting the extracted constraints, checking and resolving the conflicts between these constraints and any uploaded dataset. Ferry's effectiveness and usability are evaluated via a usage scenario and two case studies: the first assists users in onboarding new data and debugging scripts, while the second verifies input-output compatibility across data processing modules. 
Furthermore, an illustrative application is presented to demonstrate Ferry's flexibility.","accessible_pdf":false,"authors":[{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"rickyluozs@gmail.com","is_corresponding":true,"name":"Zhongsu Luo"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"kaixiong@zju.edu.cn","is_corresponding":false,"name":"Kai Xiong"},{"affiliations":["Zhejiang University, Hangzhou,Zhejiang, China"],"email":"3220105578@zju.edu.cn","is_corresponding":false,"name":"Jiajun Zhu"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"chenran928@zju.edu.cn","is_corresponding":false,"name":"Ran Chen"},{"affiliations":["Newcastle University, Newcastle Upon Tyne, United Kingdom"],"email":"xinhuan.shu@gmail.com","is_corresponding":false,"name":"Xinhuan Shu"},{"affiliations":["Zhejiang University, Ningbo, China"],"email":"dweng@zju.edu.cn","is_corresponding":false,"name":"Di Weng"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"ycwu@zju.edu.cn","is_corresponding":false,"name":"Yingcai Wu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Zhongsu Luo"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1730","time_end":"","time_stamp":"","time_start":"","title":"Ferry: Toward Better Understanding of Input/Output Space for Data Wrangling Scripts","uid":"v-full-1730","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1738":{"abstract":"As a step towards improving visualization literacy, we investigated how students approach reading visualizations differently after taking a university-level visualization course. We asked students to verbally walk through their process of making sense of unfamiliar visualizations, and conducted a qualitative analysis of these walkthroughs. Our qualitative analysis found changes in students' walkthroughs consistent with explicit learning goals of visualization courses. After taking a visualization course, students also engaged with visualizations in more sophisticated ways not fully captured by explicit learning goals: they were more likely to exhibit design empathy by thinking critically about the tradeoffs behind why a chart was designed in a particular way, and were better able to deconstruct a chart to make sense of it. We also gave students a quantitative assessment of visualization literacy and found no evidence of scores improving after the class, likely because the test we used focused on a different set of skills than those emphasized in visualization classes. While current measurement instruments for visualization literacy are useful, we propose developing standardized assessments for additional aspects of visualization literacy, such as deconstruction and design empathy. We also suggest those additional aspects could be made more explicit in learning goals set by visualization educators. 
All supplemental materials are available at https://osf.io/w5pum/?view_only=f9eca3fa4711425582d454031b9c482e.","accessible_pdf":false,"authors":[{"affiliations":["Northwestern University, Evanston, United States"],"email":"maryam.hedayati@u.northwestern.edu","is_corresponding":true,"name":"Maryam Hedayati"},{"affiliations":["Northwestern University, Chicago, United States"],"email":"matthew.kay@gmail.com","is_corresponding":false,"name":"Matthew Kay"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Maryam Hedayati"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1738","time_end":"","time_stamp":"","time_start":"","title":"What University Students Learn In Visualization Classes","uid":"v-full-1738","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1746":{"abstract":"Hypergraphs provide a natural way to represent polyadic relationships in network data. For large hypergraphs, it is often difficult to visually detect structures within the data. Recently, a scalable polygon-based visualization framework was developed allowing hypergraphs with thousands of hyperedges to be simplified and examined at different levels of detail. However, this approach does not consider structures such as cycles, bridges, and branches. Consequently, structures can be lost at simplified scales, making interpretations for real-world applications unreliable. In this paper, we define hypergraph structures using the bipartite graph representation. Powered by our analysis, we provide an algorithm to decompose large hypergraphs into meaningful features and to identify regions of non-planarity. We also introduce a set of topology preserving and topology altering atomic operations, enabling the preservation of important structures while removing topological noise in simplified scales. We demonstrate our approach in several real-world applications.","accessible_pdf":false,"authors":[{"affiliations":["Oregon State University, Corvallis, United States"],"email":"oliverpe@oregonstate.edu","is_corresponding":false,"name":"Peter D Oliver"},{"affiliations":["Oregon State University, Corvallis, United States"],"email":"zhange@eecs.oregonstate.edu","is_corresponding":true,"name":"Eugene Zhang"},{"affiliations":["Oregon State University, Corvallis, United States"],"email":"zhangyue@oregonstate.edu","is_corresponding":false,"name":"Yue Zhang"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Eugene Zhang"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1746","time_end":"","time_stamp":"","time_start":"","title":"Structure-Aware Simplification for Hypergraph Visualization","uid":"v-full-1746","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1770":{"abstract":"The semantic similarity between documents of a text corpus can be visualized using map-like metaphors based on two-dimensional scatterplot layouts. 
These layouts result from a dimensionality reduction on the document-term matrix or a representation within a latent embedding, including topic models. Thereby, the resulting layout depends on the input data and hyperparameters of the dimensionality reduction and is therefore affected by changes in them. However, such changes to the layout require additional cognitive efforts from the user. In this work, we present a sensitivity study that analyzes the stability of these layouts concerning (1) changes in the text corpora, (2) changes in the hyperparameters, and (3) randomness in the initialization. Our approach has two stages: data measurement and data analysis. First, we derived layouts for the combinations of three text corpora, six text embeddings, and a grid-search-inspired hyperparameter selection of the dimensionality reductions. Afterward, we quantified the similarity of the layouts through ten metrics, concerning local and global structures and class separation. Second, we analyzed the resulting 42817 tabular data points in a descriptive statistical analysis. From this, we derived guidelines for informed decisions on the layout algorithm and highlighted specific hyperparameter settings. We provide our implementation and results as a Git repository at https://github.com/hpicgs/Topic-Models-and-Dimensionality-Reduction-Sensitivity-Study .","accessible_pdf":false,"authors":[{"affiliations":["University of Potsdam, Digital Engineering Faculty, Hasso Plattner Institute, Potsdam, Germany"],"email":"daniel.atzberger@hpi.de","is_corresponding":true,"name":"Daniel Atzberger"},{"affiliations":["University of Potsdam, Potsdam, Germany"],"email":"tcech@uni-potsdam.de","is_corresponding":false,"name":"Tim Cech"},{"affiliations":["Hasso Plattner Institute, Faculty of Digital Engineering, University of Potsdam, Potsdam, Germany"],"email":"willy.scheibel@hpi.de","is_corresponding":false,"name":"Willy Scheibel"},{"affiliations":["Hasso Plattner Institute, Faculty of Digital Engineering, University of Potsdam, Potsdam, Germany"],"email":"juergen.doellner@hpi.de","is_corresponding":false,"name":"J\u00fcrgen D\u00f6llner"},{"affiliations":["Utrecht University, Utrecht, Netherlands"],"email":"m.behrisch@uu.nl","is_corresponding":false,"name":"Michael Behrisch"},{"affiliations":["Graz University of Technology, Graz, Austria"],"email":"tobias.schreck@cgv.tugraz.at","is_corresponding":false,"name":"Tobias Schreck"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Daniel Atzberger"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1770","time_end":"","time_stamp":"","time_start":"","title":"A Large-Scale Sensitivity Analysis on Latent Embeddings and Dimensionality Reductions for Text Spatializations","uid":"v-full-1770","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1793":{"abstract":"This research explores a novel paradigm for preserving topological segmentations in existing error-bounded lossy compressors. 
Today's lossy compressors rarely consider preserving topologies such as Morse-Smale complexes, and the discrepancies in topology between original and decompressed datasets could potentially result in erroneous interpretations or even incorrect scientific conclusions. In this paper, we focus on preserving Morse-Smale segmentations in 2D/3D piecewise linear scalar fields, targeting the precise reconstruction of minimum/maximum labels induced by the integral curve of each vertex. The key is to derive a series of edits during compression time; the edits are applied to the decompressed data, leading to an accurate reconstruction of segmentations while keeping the error within the prescribed error bound. To this end, we developed a workflow to fix extrema and integral curves alternately until convergence within finite iterations; we accelerate each workflow component with shared-memory/GPU parallelism to make the performance practical for coupling with compressors. We demonstrate use cases with fluid dynamics, ocean, and cosmology application datasets, achieving a 1000x acceleration with an NVIDIA A100 GPU.","accessible_pdf":false,"authors":[{"affiliations":["The Ohio State University, Columbus, United States"],"email":"li.14025@osu.edu","is_corresponding":true,"name":"Yuxiao Li"},{"affiliations":["University of California, Riverside, Riverside, United States"],"email":"xlian007@ucr.edu","is_corresponding":false,"name":"Xin Liang"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"wang.bei@gmail.com","is_corresponding":false,"name":"Bei Wang"},{"affiliations":["The Ohio State University, Columbus, United States"],"email":"qiu.722@osu.edu","is_corresponding":false,"name":"Yongfeng Qiu"},{"affiliations":["Argonne National Laboratory, Lemont, United States"],"email":"lyan@anl.gov","is_corresponding":false,"name":"Lin Yan"},{"affiliations":["The Ohio State University, Columbus, United States"],"email":"guo.2154@osu.edu","is_corresponding":false,"name":"Hanqi Guo"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yuxiao Li"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1793","time_end":"","time_stamp":"","time_start":"","title":"MSz: An Efficient Parallel Algorithm for Correcting Morse-Smale Segmentations in Error-Bounded Lossy Compressors","uid":"v-full-1793","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1802":{"abstract":"In the biomedical domain, visualizing the document embeddings of an extensive corpus has been widely used in information-seeking tasks. However, three key challenges with existing visualizations make it difficult for clinicians to find information efficiently. First, the document embeddings used in these visualizations are generated statically by pretrained language models, which cannot adapt to the user's evolving interest. Second, existing document visualization techniques cannot effectively display how the documents are relevant to users\u2019 interest, making it difficult for users to identify the most pertinent information. Third, existing embedding generation and visualization processes suffer from a lack of interpretability, making it difficult to understand, trust and use the result for decision-making. 
In this paper, we present a novel visual analytics pipeline for user-driven document representation and iterative information seeking (VADIS). VADIS introduces a prompt-based attention model (PAM) that generates dynamic document embeddings and document relevance adjusted to the user's query. To effectively visualize these two pieces of information, we design a new document map that leverages a circular grid layout to display documents based on both their relevance to the query and their semantic similarity. Additionally, to improve interpretability, we introduce a corpus-level attention visualization method that enhances the user's understanding of the model focus and enables users to identify potential oversights. This visualization, in turn, empowers users to refine, update and introduce new queries, thereby facilitating a dynamic and iterative information-seeking experience. We evaluated VADIS quantitatively and qualitatively on a real-world dataset of biomedical research papers to demonstrate its effectiveness.","accessible_pdf":false,"authors":[{"affiliations":["Ohio State University, Columbus, United States"],"email":"qiu.580@buckeyemail.osu.edu","is_corresponding":true,"name":"Rui Qiu"},{"affiliations":["The Ohio State University, Columbus, United States"],"email":"tu.253@osu.edu","is_corresponding":false,"name":"Yamei Tu"},{"affiliations":["Washington University School of Medicine in St. Louis, St. Louis, United States"],"email":"yenp@wustl.edu","is_corresponding":false,"name":"Po-Yin Yen"},{"affiliations":["The Ohio State University, Columbus, United States"],"email":"hwshen@cse.ohio-state.edu","is_corresponding":false,"name":"Han-Wei Shen"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Rui Qiu"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1802","time_end":"","time_stamp":"","time_start":"","title":"VADIS: A Visual Analytics Pipeline for Dynamic Document Representation and Information Seeking","uid":"v-full-1802","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1803":{"abstract":"Scalar field comparison is a fundamental task in scientific visualization. In topological data analysis, we compare topological descriptors of scalar fields---such as persistence diagrams and merge trees---as they provide succinct and robust abstract representations. While several similarity measures for topological descriptors seem to be both asymptotically and practically efficient with polynomial time algorithms, they do not scale well when handling large-scale, time-varying scientific data and ensembles. In this paper, we propose a new framework to facilitate the comparative analysis of merge trees, inspired by tools from locality sensitive hashing (LSH). LSH hashes similar objects into the same hash buckets with high probability. We propose two new similarity measures for merge trees that can be computed via LSH, using new extensions to Recursive MinHash and subpath signature, respectively. Our similarity measures are extremely efficient to compute and closely resemble the results of existing measures such as merge tree edit distance or geometric interleaving distance. 
Our experiments demonstrate the utility of our LSH framework in applications such as shape matching, clustering, key event detection, and ensemble summarization.","accessible_pdf":false,"authors":[{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"lyuweiran@gmail.com","is_corresponding":false,"name":"Weiran Lyu"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"g.s.raghavendra@gmail.com","is_corresponding":true,"name":"Raghavendra Sridharamurthy"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"jeffp@cs.utah.edu","is_corresponding":false,"name":"Jeff M. Phillips"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"wang.bei@gmail.com","is_corresponding":false,"name":"Bei Wang"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Raghavendra Sridharamurthy"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1803","time_end":"","time_stamp":"","time_start":"","title":"Fast Comparative Analysis of Merge Trees Using Locality-Sensitive Hashing","uid":"v-full-1803","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1805":{"abstract":"The optimization of cooling systems is important in many cases, for example for cabin and battery cooling in electric cars. Such an optimization is governed by multiple, conflicting objectives and it is performed across a multi-dimensional parameter space. The extent of the parameter space, the complexity of the non-linear model of the system, as well as the time needed per simulation run and factors that are not modeled in the simulation necessitate an iterative, semi-automatic approach. We present an interactive visual optimization approach, where the user works with a p-h diagram to steer an iterative, guided optimization process. A deep learning (DL) model provides estimates for parameters, given a target characterization of the system, while numerical simulation is used to predict system characteristics for an ensemble of parameter sets. Since the DL model only serves as an approximation of the inverse of the cooling system and since target characteristics can be chosen according to different, competing objectives, an iterative optimization process is realized, developing multiple sets of intermediate solutions, which are visually related to each other. The standard p-h diagram, integrated interactively in this approach, is complemented by a dual, also interactive visual representation of additional expressive measures representing the system characteristics. We show how the known four-points semantic of the p-h diagram meaningfully transfers to the dual data representation. 
When evaluating this approach with our partners in the automotive domain, we found that our solution helped with the overall comprehension of the cooling system and that it led to a faster convergence during optimization.","accessible_pdf":false,"authors":[{"affiliations":["VRVis Research Center, Vienna, Austria"],"email":"splechtna@vrvis.at","is_corresponding":false,"name":"Rainer Splechtna"},{"affiliations":["Virginia Tech, Blacksburg, United States"],"email":"behravan@vt.edu","is_corresponding":false,"name":"Majid Behravan"},{"affiliations":["AVL AST doo, Zagreb, Croatia"],"email":"mario.jelovic@avl.com","is_corresponding":false,"name":"Mario Jelovic"},{"affiliations":["Virginia Tech, Blacksburg, United States"],"email":"gracanin@vt.edu","is_corresponding":false,"name":"Denis Gracanin"},{"affiliations":["University of Bergen, Bergen, Norway"],"email":"helwig.hauser@uib.no","is_corresponding":false,"name":"Helwig Hauser"},{"affiliations":["VRVis Research Center, Vienna, Austria"],"email":"matkovic@vrvis.at","is_corresponding":true,"name":"Kresimir Matkovic"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Kresimir Matkovic"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1805","time_end":"","time_stamp":"","time_start":"","title":"Interactive Design-of-Experiments: Optimizing a Cooling System","uid":"v-full-1805","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1809":{"abstract":"Visualizing relational data is crucial for understanding complex connections between entities in social networks, political affiliations, or biological interactions. Well-known representations like node-link diagrams and adjacency matrices offer valuable insights, but their effectiveness relies on the ability to identify patterns in the underlying topological structure. Reordering strategies and layout algorithms play a vital role in the visualization process since the arrangement of nodes, edges, or cells influences the visibility of these patterns. The BioFabric visualization combines elements of node-link diagrams and adjacency matrices, leveraging the strengths of both: the visual clarity of node-link diagrams and the tabular organization of adjacency matrices. A unique characteristic of BioFabric is the possibility of reordering nodes and edges separately. This raises the question of which combination of layout algorithms best reveals certain patterns. In this paper, we discuss patterns and anti-patterns in BioFabric, such as staircases or escalators, relate them to already established patterns, and propose metrics to evaluate their quality. Based on these quality metrics, we compared combinations of well-established reordering techniques applied to BioFabric with a well-known benchmark data set. Our experiments indicate that the edge order has a stronger influence on revealing patterns than the node layout. The results show that the best combination for revealing staircases is a barycentric node layout, together with an edge order based on node indices and length. Our research contributes a first building block for many promising future research directions, which we also share and discuss. 
A free copy of this paper and all supplemental materials are available at OSF.","accessible_pdf":false,"authors":[{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"fuchs@dbvis.inf.uni-konstanz.de","is_corresponding":true,"name":"Johannes Fuchs"},{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"alexander.frings@uni-konstanz.de","is_corresponding":false,"name":"Alexander Frings"},{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"maria-viktoria.heinle@uni-konstanz.de","is_corresponding":false,"name":"Maria-Viktoria Heinle"},{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"keim@uni-konstanz.de","is_corresponding":false,"name":"Daniel Keim"},{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"sara.di-bartolomeo@uni-konstanz.de","is_corresponding":false,"name":"Sara Di Bartolomeo"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Johannes Fuchs"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1809","time_end":"","time_stamp":"","time_start":"","title":"Quality Metrics and Reordering Strategies for Revealing Patterns in BioFabric Visualizations","uid":"v-full-1809","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1810":{"abstract":"Classical bibliography, by scrutinizing preserved catalogs from both official archives and personal collections of accumulated books, examines the books throughout history, thereby elucidating cultural development across historical periods. In this work, we collaborate with domain experts to accomplish the task of data annotation concerning Chinese ancient catalogs. We introduce the CataAnno system that facilitates users in completing annotations more efficiently through cross-linked views, recommendation methods, and convenient annotation interactions. The recommendation method can learn the background knowledge and annotation patterns that experts subconsciously integrate into the data during prior annotation processes. CataAnno searches for the most relevant previously annotated examples and recommends them to the user. Meanwhile, the cross-linked views assist users in comprehending the correlations between entries and offer explanations for these recommendations. Evaluation and expert feedback confirm that the CataAnno system, by offering high-quality recommendations and visualizing the relationships between entries, can mitigate the necessity for specialized knowledge during the annotation process. 
This results in enhanced accuracy and consistency in annotations, thereby improving overall efficiency.","accessible_pdf":false,"authors":[{"affiliations":["Peking University, Beijing, China"],"email":"hanning.shao@pku.edu.cn","is_corresponding":true,"name":"Hanning Shao"},{"affiliations":["Peking University, Beijing, China"],"email":"xiaoru.yuan@pku.edu.cn","is_corresponding":false,"name":"Xiaoru Yuan"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Hanning Shao"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1810","time_end":"","time_stamp":"","time_start":"","title":"CataAnno: An Ancient Catalog Annotator for Annotation Cleaning by Recommendation","uid":"v-full-1810","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1830":{"abstract":"Over the past decade, several urban visual analytics systems have been proposed to tackle a host of challenges faced by cities, in areas as diverse as transportation, weather, and real estate. Many of these systems have been designed through engagement with urban experts, aiming to distill intricate urban analysis workflows into interactive visualizations and interfaces. The design, implementation, and practical use of these systems, however, still rely on siloed approaches that lead to bespoke tools that are hard to reproduce and extend. At the design level, these systems undervalue rich data workflows from urban experts by usually only treating them as data providers and evaluators. At the implementation level, these systems lack interoperability with other technical frameworks. At the practical use level, these systems tend to be narrowly focused on specific fields, inadvertently creating barriers for cross-domain collaboration. To tackle these gaps, we present Curio, a framework for collaborative urban visual analytics. Curio uses a dataflow model with multiple abstraction levels (code, grammar, GUI elements) to facilitate collaboration across the design and implementation of visual analytics components. The framework allows experts to intertwine preprocessing, managing, and visualization stages while tracking provenance of code and visualizations. In collaboration with urban experts, we evaluate Curio through a diverse series of use cases targeting urban accessibility, urban microclimate, and sunlight access. 
These cases use different types of urban data and domain methodologies to illustrate Curio's flexibility in tackling pressing societal challenges.","accessible_pdf":false,"authors":[{"affiliations":["University of Illinois at Chicago, Chicago, United States"],"email":"gmorei3@uic.edu","is_corresponding":false,"name":"Gustavo Moreira"},{"affiliations":["Massachusetts Institute of Technology , Somerville, United States"],"email":"maryamh@mit.edu","is_corresponding":false,"name":"Maryam Hosseini"},{"affiliations":["University of Illinois Urbana-Champaign, Urbana-Champaign, United States"],"email":"carolinavfs@id.uff.br","is_corresponding":false,"name":"Carolina Veiga Ferreira de Souza"},{"affiliations":["Universidade Federal Fluminense, Niteroi, Brazil"],"email":"lucasalexandre.s.cc@gmail.com","is_corresponding":false,"name":"Lucas Alexandre"},{"affiliations":["Politecnico di Milano, Milano, Italy"],"email":"nicola.colaninno@polimi.it","is_corresponding":false,"name":"Nicola Colaninno"},{"affiliations":["Universidade Federal Fluminense, Niter\u00f3i, Brazil"],"email":"danielcmo@ic.uff.br","is_corresponding":false,"name":"Daniel de Oliveira"},{"affiliations":["Universidade Federal de Pernambuco, Recife, Brazil"],"email":"nivan@cin.ufpe.br","is_corresponding":false,"name":"Nivan Ferreira"},{"affiliations":["Universidade Federal Fluminense , Niteroi, Brazil"],"email":"mlage@ic.uff.br","is_corresponding":false,"name":"Marcos Lage"},{"affiliations":["University of Illinois Chicago, Chicago, United States"],"email":"fabiom@uic.edu","is_corresponding":true,"name":"Fabio Miranda"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Fabio Miranda"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1830","time_end":"","time_stamp":"","time_start":"","title":"Curio: A Dataflow-Based Framework for Collaborative Urban Visual Analytics","uid":"v-full-1830","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1831":{"abstract":"When using exploratory visual analysis to examine multivariate hierarchical data, users often need to query data to narrow down the scope of analysis. However, formulating effective query expressions remains a challenge for multivariate hierarchical data, particularly when datasets become very large. To address this issue, we develop a declarative grammar, HiRegEx (Hierarchical data Regular Expression), for querying and exploring multivariate hierarchical data. Rooted in the extended multi-level task topology framework for tree visualizations (e-MLTT), HiRegEx delineates three query targets (node, path, and subtree) and two aspects for querying these targets (features and positions), and uses operators developed based on classical regular expressions for query construction. We develop a prototype system, TreeQueryER, to integrate an exploratory framework for querying and exploring multivariate hierarchical data based on HiRegEx. The exploratory framework includes three major components: top-down pattern specification, bottom-up data-driven inquiry, and context-creation data overview. 
We validate the expressiveness of HiRegEx with the tasks from the e-MLTT framework and showcase its utility and effectiveness through a usage scenario involving expert users in the analysis of a citation tree dataset.","accessible_pdf":false,"authors":[{"affiliations":["Beijing Institute of Technology, Beijing, China"],"email":"guozhg.li@gmail.com","is_corresponding":true,"name":"Guozheng Li"},{"affiliations":["Beijing Institute of Technology, Beijing, China"],"email":"haotian.mi1@gmail.com","is_corresponding":false,"name":"haotian mi"},{"affiliations":["Beijing Institute of Technology, Beijing, China"],"email":"liuchi02@gmail.com","is_corresponding":false,"name":"Chi Harold Liu"},{"affiliations":["Ochanomizu University, Tokyo, Japan"],"email":"itot@is.ocha.ac.jp","is_corresponding":false,"name":"Takayuki Itoh"},{"affiliations":["Beijing Institute of Technology, Beijing, China"],"email":"wanggrbit@126.com","is_corresponding":false,"name":"Guoren Wang"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Guozheng Li"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1831","time_end":"","time_stamp":"","time_start":"","title":"HiRegEx: Interactive Visual Query and Exploration of Multivariate Hierarchical Data","uid":"v-full-1831","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1833":{"abstract":"The concept of an intelligent augmented reality (AR) assistant has applications as significant as they are wide-ranging, with potential uses in medicine, military endeavors, and mechanics. Such an assistant must be able to perceive the performer\u2019s environment and actions, reason about the state of the environment in relation to a given task, and seamlessly interact with the performer. These interactions typically involve an AR headset equipped with a variety of sensors which capture video, audio, and haptic feedback. Previous works have sought to facilitate the development of such an assistant by visualizing these sensor data streams as well as the machine learning model outputs that support an assistant\u2019s perception and reasoning capabilities. However, existing visual analytics systems do not include biometric data or focus on user modeling, and are only capable of visualizing a single task session for a single performer at a time. Furthermore, they mainly focus on traditional task analysis that typically assumes a linear progression from one step to the next. We propose a visual analytics system that allows users to compare performance during multiple task sessions focusing on non-linear tasks where different paths or sequences can lead to the successful completion of the task. In particular, we design visualizations for understanding user behavior through functional near-infrared spectroscopy (fNIRS) data as a proxy for perception, attention, and memory as well as corresponding motion data (acceleration, angular velocity, and eye gaze). We distill these insights into visual embeddings that allow users to easily select groups of sessions with similar behaviors. We provide case studies that explore how insights into task performance can be gleaned from these visualizations using data collected during helicopter copilot training tasks. 
Finally, we evaluate our approach by conducting an in-depth examination of a think-aloud experiment with five domain experts.","accessible_pdf":false,"authors":[{"affiliations":["New York University, New York, United States"],"email":"s.castelo@nyu.edu","is_corresponding":true,"name":"Sonia Castelo Quispe"},{"affiliations":["New York University, New York, United States"],"email":"jlrulff@gmail.com","is_corresponding":false,"name":"Jo\u00e3o Rulff"},{"affiliations":["New York University, Brooklyn, United States"],"email":"pss442@nyu.edu","is_corresponding":false,"name":"Parikshit Solunke"},{"affiliations":["New York University, New York, United States"],"email":"erin.mcgowan@nyu.edu","is_corresponding":false,"name":"Erin McGowan"},{"affiliations":["New York University, New York City, United States"],"email":"guandewu@nyu.edu","is_corresponding":false,"name":"Guande Wu"},{"affiliations":["New York University, Brooklyn, United States"],"email":"iran@ccrma.stanford.edu","is_corresponding":false,"name":"Iran Roman"},{"affiliations":["New York University, New York, United States"],"email":"rlopez@nyu.edu","is_corresponding":false,"name":"Roque Lopez"},{"affiliations":["New York University, Brooklyn, United States"],"email":"bs3639@nyu.edu","is_corresponding":false,"name":"Bea Steers"},{"affiliations":["New York University, New York, United States"],"email":"qisun@nyu.edu","is_corresponding":false,"name":"Qi Sun"},{"affiliations":["New York University, New York, United States"],"email":"jpbello@nyu.edu","is_corresponding":false,"name":"Juan Pablo Bello"},{"affiliations":["Northrop Grumman Mission Systems, Redondo Beach, United States"],"email":"bradley.feest@ngc.com","is_corresponding":false,"name":"Bradley S Feest"},{"affiliations":["Northrop Grumman, Aurora, United States"],"email":"michael.middleton@ngc.com","is_corresponding":false,"name":"Michael Middleton"},{"affiliations":["Northrop Grumman, Falls Church, United States"],"email":"ryan.mckendrick@ngc.com","is_corresponding":false,"name":"Ryan McKendrick"},{"affiliations":["New York University, New York City, United States"],"email":"csilva@nyu.edu","is_corresponding":false,"name":"Claudio Silva"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Sonia Castelo Quispe"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1833","time_end":"","time_stamp":"","time_start":"","title":"HuBar: A Visual Analytics Tool to Explore Human Behaviour based on fNIRS in AR guidance systems","uid":"v-full-1833","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1836":{"abstract":"Shape is commonly used to distinguish between categories in multi-class scatterplots. However, existing guidelines for choosing effective shape palettes rely largely on intuition and do not consider how these needs may change as the number of categories increases. Although shapes, unlike colors, are finite in number, they cannot be represented in a numerical space, making it difficult to propose general guidelines for shape choices or to shed light on the design heuristics of designer-crafted shape palettes. 
This paper presents a series of four experiments evaluating the efficiency of 39 shapes across three tasks -- relative mean judgment tasks, expert choices, and data correlation estimation. Given how complex and tangled the results are, rather than relying on conventional features for modeling, we built a model and introduced a corresponding design tool that offers recommendations for shape encodings. The perceptual effectiveness of shapes significantly varies across specific pairs, and certain shapes may enhance perceptual efficiency and accuracy. However, how performance varies does not map well to classical features of shape such as angles, fill, or convex hull. We developed a model based on pairwise relations between shapes measured in our experiments and the number of shapes required to intelligently recommend shape palettes for a given design. This tool provides designers with agency over shape selection while incorporating empirical elements of perceptual performance captured in our study. Our model advances the understanding of shape perception in visualization contexts and provides practical design guidelines for advanced shape usage in visualization design that optimize perceptual efficiency.","accessible_pdf":false,"authors":[{"affiliations":["University of North Carolina-Chapel Hill, Chapel Hill, United States"],"email":"chint@cs.unc.edu","is_corresponding":true,"name":"Chin Tseng"},{"affiliations":["University of North Carolina-Chapel Hill, Chapel Hill, United States"],"email":"zeyuwang@cs.unc.edu","is_corresponding":false,"name":"Arran Zeyu Wang"},{"affiliations":["University of Oklahoma, Norman, United States"],"email":"quadri@ou.edu","is_corresponding":false,"name":"Ghulam Jilani Quadri"},{"affiliations":["University of North Carolina-Chapel Hill, Chapel Hill, United States"],"email":"danielle.szafir@cs.unc.edu","is_corresponding":false,"name":"Danielle Albers Szafir"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Chin Tseng"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1836","time_end":"","time_stamp":"","time_start":"","title":"An Empirically Grounded Approach for Designing Shape Palettes","uid":"v-full-1836","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1865":{"abstract":"In medical diagnostics, for both early disease detection and routine patient care, particle-based contamination of in-vitro diagnostics (IVD) consumables poses a significant threat to patients. Objective, data-driven decision making on the severity of contamination is key for reducing risk to patients, while saving time and cost in the quality assessment process. Our collaborators introduced us to their quality control process, including particle data acquisition through image recognition, feature extraction, and attributes reflecting the production context of particles. Shortcomings of the current process are analysis problems, such as weak support for exploring thousands of particle images and associated attributes, and ineffective knowledge externalization for sense-making. 
Following the design study methodology, our contributions are a characterization of the problem space and requirements, the development and validation of DaedalusData, a comprehensive discussion of our study\u2019s learnings, and a generalizable approach for knowledge externalization. DaedalusData is a visual analytics system that empowers domain experts to explore particle contamination patterns, to label particles in label alphabets, and to externalize knowledge through semi-supervised label-informed data projections. The results of our case study show that DaedalusData supports experts in generating meaningful, comprehensive data overviews. Additionally, our user study evaluation shows that DaedalusData offers high usability, efficiently supports the labeling of large quantities of particles, and utilizes externalized knowledge to augment the dataset. Reflecting on our approach, we discuss insights on dataset augmentation via human knowledge externalization, and on the scalability and trade-offs that come with the adoption of this approach in practice.","accessible_pdf":false,"authors":[{"affiliations":["University of Z\u00fcrich, Z\u00fcrich, Switzerland","Roche pRED, Basel, Switzerland"],"email":"alexander.wyss@protonmail.com","is_corresponding":true,"name":"Alexander Wyss"},{"affiliations":["University of Zurich, Zurich, Switzerland"],"email":"gab.morgenshtern@gmail.com","is_corresponding":false,"name":"Gabriela Morgenshtern"},{"affiliations":["Roche Diagnostics International, Rotkreuz, Switzerland"],"email":"a.hirschhuesler@gmail.com","is_corresponding":false,"name":"Amanda Hirsch-H\u00fcsler"},{"affiliations":["University of Zurich, Zurich, Switzerland"],"email":"bernard@ifi.uzh.ch","is_corresponding":false,"name":"J\u00fcrgen Bernard"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Alexander Wyss"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1865","time_end":"","time_stamp":"","time_start":"","title":"DaedalusData: Exploration, Knowledge Externalization and Labeling of Particles in Medical Manufacturing - A Design Study","uid":"v-full-1865","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1866":{"abstract":"Feature grid Scene Representation Networks (SRNs) have been applied to scientific data as compact functional surrogates for analysis and visualization. As SRNs are black-box lossy data representations, assessing the prediction quality is critical for scientific visualization applications to ensure that scientists can trust the information being visualized. Currently, existing architectures do not support inference time reconstruction quality assessment, as voxel-wise errors cannot be evaluated in the absence of ground truth data. By employing uncertain neural network architectures in feature grid SRNs, we obtain prediction variances during inference time to facilitate confidence-aware data reconstruction. Specifically, we propose a parameter-efficient multi-decoder Ensemble SRN (E-SRN) architecture consisting of a shared feature grid with multiple lightweight multi-layer perceptron decoders. E-SRN can generate a set of plausible predictions for a given input coordinate to compute the mean as the ensemble prediction and the variance as a confidence score. 
The voxel-wise variance can be rendered along with the data to inform the reconstruction quality, or be integrated into uncertainty-aware volume visualization algorithms. To prevent the misalignment between the quantified variance and the prediction quality, we propose a novel variance regularization loss for ensemble learning that promotes the Regularized Ensemble SRN (RE-SRN) to obtain a more reliable variance that correlates closely to the true model error. We comprehensively evaluate the quality of variance quantification and data reconstruction of Monte Carlo Dropout (MCD), Mean Field Variational Inference (MFVI), Deep Ensemble (DE), and Predicting Variance (PV) in comparison with our proposed E-SRN and RE-SRN applied to state-of-the-art feature grid SRNs across diverse scalar field datasets. We demonstrate that RE-SRN attains the most accurate data reconstruction and competitive variance-error correlation among uncertain SRNs under the same neural network parameter budgets. Furthermore, we present an adaptation of uncertainty-aware volume rendering and shed light on the potential of incorporating uncertain predictions in improving the quality of volume rendering for uncertain SRNs. Through ablation studies on the regularization strength and ensemble size, we show that E-SRN and RE-SRN are expected to perform sufficiently well with a default configuration without requiring customized hyperparameter settings for different datasets.","accessible_pdf":false,"authors":[{"affiliations":["The Ohio State University, Columbus, United States"],"email":"xiong.336@osu.edu","is_corresponding":true,"name":"Tianyu Xiong"},{"affiliations":["The Ohio State University, Columbus, United States"],"email":"wurster.18@osu.edu","is_corresponding":false,"name":"Skylar Wolfgang Wurster"},{"affiliations":["The Ohio State University, Columbus, United States","Argonne National Laboratory, Lemont, United States"],"email":"guo.2154@osu.edu","is_corresponding":false,"name":"Hanqi Guo"},{"affiliations":["Argonne National Laboratory, Lemont, United States"],"email":"tpeterka@mcs.anl.gov","is_corresponding":false,"name":"Tom Peterka"},{"affiliations":["The Ohio State University , Columbus , United States"],"email":"hwshen@cse.ohio-state.edu","is_corresponding":false,"name":"Han-Wei Shen"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Tianyu Xiong"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1866","time_end":"","time_stamp":"","time_start":"","title":"Regularized Multi-Decoder Ensemble for an Error-Aware Scene Representation Network","uid":"v-full-1866","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1874":{"abstract":"A layered network is an important category of graph in which every node is assigned to a layer and layers are drawn as parallel or radial lines. They are commonly used to display temporal data or hierarchical networks. Previous research has demonstrated that minimizing edge crossings is the most important criterion to consider when looking to improve the readability of such networks. While heuristic approaches exist for crossing minimization, we are interested in optimal approaches to the problem that prioritize human readability over computational scalability. 
We aim to improve the usefulness and applicability of such optimal methods by understanding and improving their scalability to larger graphs. This paper categorizes and evaluates the state-of-the-art linear programming formulations for exact crossing minimization and describes nine new and existing techniques that could plausibly accelerate the optimization algorithm. Through a computational evaluation, we explore each technique's effect on calculation time and how the techniques assist or inhibit one another, allowing researchers and practitioners to adapt them to the characteristics of their networks. Our best-performing techniques yielded a median improvement of 2.5--17x depending on the solver used, giving us the capability to create optimal layouts faster and for larger networks. We provide an open-source implementation of our methodology in Python, where users can pick which combination of techniques to enable according to their use case. A free copy of this paper and all supplemental materials, datasets used, and source code are available at https://osf.io/.","accessible_pdf":false,"authors":[{"affiliations":["Northeastern University, Boston, United States"],"email":"wilson.conn@northeastern.edu","is_corresponding":true,"name":"Connor Wilson"},{"affiliations":["Northeastern University, Boston, United States"],"email":"eduardopuertac@gmail.com","is_corresponding":false,"name":"Eduardo Puerta"},{"affiliations":["Northeastern University, Boston, United States"],"email":"turokhunter@gmail.com","is_corresponding":false,"name":"Tarik Crnovrsanin"},{"affiliations":["University of Konstanz, Konstanz, Germany","Northeastern University, Boston, United States"],"email":"sara.di-bartolomeo@uni-konstanz.de","is_corresponding":false,"name":"Sara Di Bartolomeo"},{"affiliations":["Northeastern University, Boston, United States"],"email":"c.dunne@northeastern.edu","is_corresponding":false,"name":"Cody Dunne"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Connor Wilson"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1874","time_end":"","time_stamp":"","time_start":"","title":"Evaluating and extending speedup techniques for optimal crossing minimization in layered graph drawings","uid":"v-full-1874","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1880":{"abstract":"Merge trees are a valuable tool in scientific visualization of scalar fields; however, current methods for merge tree comparisons are computationally expensive, primarily due to the exhaustive matching between tree nodes. To address this challenge, we introduce the merge tree neural networks (MTNN), a learned neural network model designed for merge tree comparison. The MTNN enables rapid and high-quality similarity computation. We first demonstrate how graph neural networks (GNNs), which have emerged as an effective encoder for graphs, can be trained to produce embeddings of merge trees in vector spaces that enable efficient similarity comparison. Next, we formulate the novel MTNN model that further improves the similarity comparisons by integrating the tree and node embeddings with a new topological attention mechanism. 
We demonstrate the effectiveness of our model on real-world data in different domains and examine our model's generalizability across various datasets. Our experimental analysis demonstrates our approach's superiority in accuracy and efficiency. In particular, we speed up the prior state-of-the-art by more than 100x on the benchmark datasets while maintaining an error rate below 0.1%.","accessible_pdf":false,"authors":[{"affiliations":["Tulane University, New Orleans, United States"],"email":"yqin2@tulane.edu","is_corresponding":true,"name":"Yu Qin"},{"affiliations":["Montana State University, Bozeman, United States"],"email":"brittany.fasy@montana.edu","is_corresponding":false,"name":"Brittany Terese Fasy"},{"affiliations":["Tulane University, New Orleans, United States"],"email":"cwenk@tulane.edu","is_corresponding":false,"name":"Carola Wenk"},{"affiliations":["Tulane University, New Orleans, United States"],"email":"bsumma@tulane.edu","is_corresponding":false,"name":"Brian Summa"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yu Qin"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1880","time_end":"","time_stamp":"","time_start":"","title":"Rapid and Precise Topological Comparison with Merge Tree Neural Networks","uid":"v-full-1880","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-full-1917":{"abstract":"The importance of data charts is self-evident, given their ability to express complex data in a simple format that facilitates quick and easy comparisons, analysis, and consumption. However, the inherent visual nature of the charts creates barriers for people with visual impairments to reap the associated benefits to the same extent as their sighted peers. While extant research has predominantly focused on understanding and addressing these barriers for blind screen reader users, the needs of low-vision screen magnifier users have been largely overlooked. In an interview study, almost all low-vision participants stated that it was challenging to interact with data charts on small screen devices such as smartphones and tablets, even though they could technically \u201csee\u201d the chart content. They ascribed these challenges mainly to the magnification-induced loss of visual context that connected data points with each other and also with chart annotations, e.g., axis values. In this paper, we present a method that addresses this problem by automatically transforming charts that are typically non-interactive images into personalizable interactive charts which allow selective viewing of desired data points and preserve visual context as much as possible under screen enlargement. We evaluated our method in a usability study with 26 low-vision participants, who all performed a set of representative chart-related tasks under different study conditions. 
In the study, we observed that our method significantly improved the usability of charts over both the status quo screen magnifier and a state-of-the-art space compaction-based solution.","accessible_pdf":false,"authors":[{"affiliations":["Old Dominion University, Norfolk, United States"],"email":"yprak001@odu.edu","is_corresponding":true,"name":"Yash Prakash"},{"affiliations":["Old Dominion University, Norfolk, United States"],"email":"pkhan002@odu.edu","is_corresponding":false,"name":"Pathan Aseef Khan"},{"affiliations":["Old Dominion University, Norfolk, United States"],"email":"anaya001@odu.edu","is_corresponding":false,"name":"Akshay Kolgar Nayak"},{"affiliations":["Old Dominion University, Norfolk, United States"],"email":"uksjayarathna@gmail.com","is_corresponding":false,"name":"Sampath Jayarathna"},{"affiliations":["Michigan State University, East Lansing, United States"],"email":"leehaena@msu.edu","is_corresponding":false,"name":"Hae-Na Lee"},{"affiliations":["Old Dominion University, Norfolk, United States"],"email":"vganjigu@odu.edu","is_corresponding":false,"name":"Vikas Ashok"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yash Prakash"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1917","time_end":"","time_stamp":"","time_start":"","title":"Towards Enhancing Low Vision Usability of Data Charts on Smartphones","uid":"v-full-1917","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1040":{"abstract":"From dirty data to intentional deception, there are many threats to the validity of data-driven decisions. Making use of data, especially new or unfamiliar data, therefore requires a degree of trust or verification. How is this trust established? In this paper, we present the results of a series of interviews with both producers and consumers of data artifacts (outputs of data ecosystems like spreadsheets, charts, and dashboards) aimed at understanding strategies and obstacles to building trust in data. We find a recurring need, but lack of existing standards, for data validation and verification, especially among data consumers. 
We therefore propose a set of data guards: methods and tools for fostering trust in data artifacts.","accessible_pdf":false,"authors":[{"affiliations":["Tableau Research, Seattle, United States"],"email":"nicole.sultanum@gmail.com","is_corresponding":true,"name":"Nicole Sultanum"},{"affiliations":["Tableau Research, Seattle, United States"],"email":"bromley.denny@gmail.com","is_corresponding":false,"name":"Dennis Bromley"},{"affiliations":["Northeastern University, Portland, United States"],"email":"m.correll@northeastern.edu","is_corresponding":false,"name":"Michael Correll"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Nicole Sultanum"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1040","time_end":"","time_stamp":"","time_start":"","title":"Data Guards: Challenges and Solutions for Fostering Trust in Data","uid":"v-short-1040","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1047":{"abstract":"In the rapidly evolving field of deep learning, the traditional methodologies for designing deep learning models predominantly rely on code-based frameworks. While these approaches provide flexibility, they also create a significant barrier to entry for non-experts and obscure the immediate impact of architectural decisions on model performance. In response to this challenge, recent no-code approaches have been developed with the aim of enabling easy model development through graphical interfaces. However, both traditional and no-code methodologies share a common limitation: the inability to predict model outcomes or identify issues without executing the model. To address this limitation, we introduce an intuitive visual feedback-based no-code approach to visualize and analyze deep learning models during the design phase. This approach utilizes dataflow-based visual programming with dynamic visual encoding of model architecture. A user study was conducted with deep learning developers to demonstrate the effectiveness of our approach in enhancing the model design process, improving model understanding, and facilitating a more intuitive development experience. 
The findings of this study suggest that real-time architectural visualization significantly contributes to more efficient model development and a deeper understanding of model behaviors.","accessible_pdf":false,"authors":[{"affiliations":["VIENCE Inc., Seoul, Korea, Republic of","Korea University, Seoul, Korea, Republic of"],"email":"juny0603@gmail.com","is_corresponding":true,"name":"JunYoung Choi"},{"affiliations":["VIENCE Inc., Seoul, Korea, Republic of"],"email":"wings159@vience.co.kr","is_corresponding":false,"name":"Sohee Park"},{"affiliations":["Korea University, Seoul, Korea, Republic of"],"email":"hellenkoh@gmail.com","is_corresponding":false,"name":"GaYeon Koh"},{"affiliations":["VIENCE Inc., Seoul, Korea, Republic of"],"email":"k0seo0330@vience.co.kr","is_corresponding":false,"name":"Youngseo Kim"},{"affiliations":["VIENCE Inc., Seoul, Korea, Republic of","Korea University, Seoul, Korea, Republic of"],"email":"wkjeong@korea.ac.kr","is_corresponding":false,"name":"Won-Ki Jeong"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["JunYoung Choi"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1047","time_end":"","time_stamp":"","time_start":"","title":"Intuitive Design of Deep Learning Models through Visual Feedback","uid":"v-short-1047","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1049":{"abstract":"This comparative study evaluates various neural surface reconstruction methods, particularly focusing on their implications for scientific visualization through reconstructing 3D surfaces via multi-view rendering images. We categorize ten methods into neural radiance fields and neural implicit surfaces, uncovering the benefits of leveraging distance functions (i.e., SDFs and UDFs) to enhance the accuracy and smoothness of the reconstructed surfaces. Our findings highlight the efficiency and quality of NeuS2 for reconstructing closed surfaces and identify NeUDF as a promising candidate for reconstructing open surfaces despite some limitations. We further pinpoint directions for future research, including improving detail capture, optimizing UDF computations, and refining surface extraction methods. 
By sharing our benchmark dataset, we invite researchers to test the performance of their methods, contributing to the advancement of surface reconstruction solutions for scientific visualization.","accessible_pdf":false,"authors":[{"affiliations":["University of Notre Dame, Notre Dame, United States"],"email":"syao2@nd.edu","is_corresponding":true,"name":"Siyuan Yao"},{"affiliations":["Wuhan University, Wuhan, China"],"email":"song.wx@whu.edu.cn","is_corresponding":false,"name":"Weixi Song"},{"affiliations":["University of Notre Dame, Notre Dame, United States"],"email":"chaoli.wang@nd.edu","is_corresponding":false,"name":"Chaoli Wang"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Siyuan Yao"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1049","time_end":"","time_stamp":"","time_start":"","title":"A Comparative Study of Neural Surface Reconstruction for Scientific Visualization","uid":"v-short-1049","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1054":{"abstract":"Direct volume rendering using ray-casting is widely used in practice. By using GPUs and applying acceleration techniques such as empty space skipping, high frame rates are possible on modern hardware. This enables performance-critical use-cases such as virtual reality volume rendering. The currently fastest known technique uses volumetric distance maps to skip empty sections of the volume during ray-casting but requires the distance map to be updated per transfer function change. In this paper, we demonstrate a technique for subdividing the volume intensity range into partitions and deriving what we call partitioned distance maps. These can be used to accelerate the distance map computation for a newly changed transfer function by a factor of up to 30. 
This allows the currently fastest known empty space skipping approach to be used while maintaining high frame rates even when the transfer function is changed frequently.","accessible_pdf":false,"authors":[{"affiliations":["University of Applied Sciences Wiener Neustadt, Wiener Neustadt, Austria"],"email":"michael.rauter@fhwn.ac.at","is_corresponding":true,"name":"Michael Rauter"},{"affiliations":["Medical University of Vienna, Vienna, Austria"],"email":"lukas.a.zimmermann@meduniwien.ac.at","is_corresponding":false,"name":"Lukas Zimmermann PhD"},{"affiliations":["University of Applied Sciences Wiener Neustadt, Wiener Neustadt, Austria"],"email":"markus.zeilinger@fhwn.ac.at","is_corresponding":false,"name":"Markus Zeilinger PhD"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Michael Rauter"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1054","time_end":"","time_stamp":"","time_start":"","title":"Accelerating Transfer Function Update for Distance Map based Volume Rendering","uid":"v-short-1054","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1056":{"abstract":"We present FCNR, a fast compressive neural representation for tens of thousands of visualization images under varying viewpoints and timesteps. The existing NeRVI solution, albeit enjoying a high compression rate, incurs slow speeds in encoding and decoding. Built on the recent advances in stereo image compression, FCNR assimilates stereo context modules and joint context transfer modules to compress image pairs. Our solution significantly improves encoding and decoding speed while maintaining high reconstruction quality and a satisfying compression rate. To demonstrate its effectiveness, we compare FCNR with state-of-the-art neural compression methods, including E-NeRV, HNeRV, NeRVI, and ECSIC.","accessible_pdf":false,"authors":[{"affiliations":["University of Notre Dame, Notre Dame, United States"],"email":"ylu25@nd.edu","is_corresponding":true,"name":"Yunfei Lu"},{"affiliations":["University of Notre Dame, Notre Dame, United States"],"email":"pgu@nd.edu","is_corresponding":false,"name":"Pengfei Gu"},{"affiliations":["University of Notre Dame, Notre Dame, United States"],"email":"chaoli.wang@nd.edu","is_corresponding":false,"name":"Chaoli Wang"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yunfei Lu"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1056","time_end":"","time_stamp":"","time_start":"","title":"FCNR: Fast Compressive Neural Representation of Visualization Images","uid":"v-short-1056","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1057":{"abstract":"Real-world datasets often consist of quantitative and categorical variables. The analyst needs to focus on either kind separately or both jointly. We propose a visualization technique tackling these challenges that supports visual cluster and set analysis. 
In this paper, we investigate how its visualization parameters affect the accuracy and speed of cluster and set analysis tasks in a controlled experiment. Our findings show that, with the proper settings, our visualization can support both task types well. However, we did not find settings suitable for the joint task, which provides opportunities for future research.","accessible_pdf":false,"authors":[{"affiliations":["TU Wien, Vienna, Austria"],"email":"nikolaus.piccolotto@tuwien.ac.at","is_corresponding":true,"name":"Nikolaus Piccolotto"},{"affiliations":["TU Wien, Vienna, Austria"],"email":"mwallinger@ac.tuwien.ac.at","is_corresponding":false,"name":"Markus Wallinger"},{"affiliations":["Institute of Visual Computing and Human-Centered Technology, Vienna, Austria"],"email":"miksch@ifs.tuwien.ac.at","is_corresponding":false,"name":"Silvia Miksch"},{"affiliations":["TU Wien, Vienna, Austria"],"email":"markus.boegl@tuwien.ac.at","is_corresponding":false,"name":"Markus B\u00f6gl"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Nikolaus Piccolotto"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1057","time_end":"","time_stamp":"","time_start":"","title":"On Combined Visual Cluster and Set Analysis","uid":"v-short-1057","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1058":{"abstract":"Semantic interaction (SI) in Dimension Reduction (DR) of images allows users to incorporate feedback through direct manipulation of the 2D positions of images. Through interaction, users specify a set of pairwise relationships that the DR should aim to capture. Existing methods for images incorporate feedback into the DR through feature weights on abstract embedding features. However, if the original embedding features do not suitably capture the user's task, then the DR cannot either. We propose ImageSI, an SI method for image DR that incorporates user feedback directly into the image model to update the underlying embeddings, rather than weighting them. In doing so, ImageSI ensures that the embeddings suitably capture the features necessary for the task so that the DR can subsequently organize images using those features. We present two variations of ImageSI using different loss functions: ImageSI_MDS-Inverse, which prioritizes the explicit pairwise relationships from the interaction, and ImageSI_Triplet, which prioritizes clustering, using the interaction to define groups of images. 
Finally, we present a usage scenario and a simulation-based evaluation to demonstrate the utility of ImageSI and compare it to current methods.","accessible_pdf":false,"authors":[{"affiliations":["Virginia Tech, Blacksburg, United States"],"email":"jiayuelin@vt.edu","is_corresponding":false,"name":"Jiayue Lin"},{"affiliations":["Tulane University, New Orleans, United States"],"email":"rfaust1@tulane.edu","is_corresponding":true,"name":"Rebecca Faust"},{"affiliations":["Virginia Tech, Blacksburg, United States"],"email":"north@vt.edu","is_corresponding":false,"name":"Chris North"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Rebecca Faust"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1058","time_end":"","time_stamp":"","time_start":"","title":"ImageSI: Semantic Interaction for Deep Learning Image Projections","uid":"v-short-1058","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1059":{"abstract":"Gantt charts are a widely used idiom for visualizing temporal discrete event sequence data where dependencies exist between events. They are popular in domains such as manufacturing and computing for their intuitive layout of such data. However, these domains frequently generate data at scales which tax both the visual representation and the ability to render it at interactive speeds. To aid visualization developers who use Gantt charts in these situations, we develop a task taxonomy of low-level visualization tasks supported by Gantt charts and connect them to the data queries needed to support them. Our taxonomy is derived through a systematic literature survey of visualizations using Gantt charts over the past 30 years.","accessible_pdf":false,"authors":[{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"sayefsakin@sci.utah.edu","is_corresponding":true,"name":"Sayef Azad Sakin"},{"affiliations":["The University of Utah, Salt Lake City, United States"],"email":"kisaacs@sci.utah.edu","is_corresponding":false,"name":"Katherine E. Isaacs"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Sayef Azad Sakin"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1059","time_end":"","time_stamp":"","time_start":"","title":"A Literature-based Visualization Task Taxonomy for Gantt charts","uid":"v-short-1059","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1062":{"abstract":"Annotations are a critical component of visualizations, helping viewers interpret the visual representation and highlighting critical data insights. Despite their significant role, we lack an understanding of how annotations can be incorporated into other data representations, such as physicalizations and sonifications. Given the emergent nature of these representations, sonifications and physicalizations lack formalized conventions (e.g., design space, vocabulary) that can introduce challenges for audiences to interpret the intended data encoding. 
To address this challenge, this work focuses on how annotations can be more tightly integrated into the design process of creating sonifications and physicalizations. In an exploratory study with 13 designers, we explore how visualization annotation techniques can be adapted to sonic and physical modalities. Our work highlights how annotations for sonifications and physicalizations are inseparable from their data encodings.","accessible_pdf":false,"authors":[{"affiliations":["Whitman College, Walla Walla, United States"],"email":"sorensor@whitman.edu","is_corresponding":false,"name":"Rhys Sorenson-Graff"},{"affiliations":["University of Colorado Boulder, Boulder, United States"],"email":"sandra.bae@colorado.edu","is_corresponding":true,"name":"S. Sandra Bae"},{"affiliations":["Whitman College, Walla Walla, United States"],"email":"wirfsbro@colorado.edu","is_corresponding":false,"name":"Jordan Wirfs-Brock"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["S. Sandra Bae"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1062","time_end":"","time_stamp":"","time_start":"","title":"Integrating Annotations into the Design Process for Sonifications and Physicalizations","uid":"v-short-1062","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1064":{"abstract":"Large Language Models (LLMs) have demonstrated remarkable versatility in visualization authoring, but often generate suboptimal designs that are invalid or fail to adhere to design guidelines for effective visualization. We present Bavisitter, a natural language interface that integrates established visualization design guidelines into LLMs. Based on our survey of the design issues in LLM-generated visualizations, Bavisitter monitors the generated visualizations during a visualization authoring dialogue to detect an issue. When an issue is detected, it intervenes in the dialogue, suggesting possible solutions to the issue by modifying the prompts. 
We also demonstrate two use cases where Bavisitter detects and resolves design issues from the actual LLM-generated visualizations.","accessible_pdf":false,"authors":[{"affiliations":["Sungkyunkwan University, Suwon, Korea, Republic of"],"email":"jiwnchoi@skku.edu","is_corresponding":true,"name":"Jiwon Choi"},{"affiliations":["Sungkyunkwan University, Suwon, Korea, Republic of"],"email":"dlwodnd00@skku.edu","is_corresponding":false,"name":"Jaeung Lee"},{"affiliations":["Sungkyunkwan University, Suwon, Korea, Republic of"],"email":"jmjo@skku.edu","is_corresponding":false,"name":"Jaemin Jo"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Jiwon Choi"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1064","time_end":"","time_stamp":"","time_start":"","title":"Bavisitter: Integrating Design Guidelines into Large Language Models for Visualization Authoring","uid":"v-short-1064","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1065":{"abstract":"Although many dimensionality reduction (DR) techniques employ stochastic methods for computational efficiency, such as negative sampling or stochastic gradient descent, their impact on the projection has been underexplored. In this work, we investigate how such stochasticity affects the stability of projections and present a novel DR technique, GhostUMAP, to measure the pointwise instability of projections. Our idea is to introduce clones of data points, \"ghosts\", into UMAP's layout optimization process. Ghosts are designed to be completely passive: they do not affect any others but are influenced by attractive and repulsive forces from the original data points. After a single optimization run, GhostUMAP can capture the projection instability of data points by measuring the variance with the projected positions of their ghosts. We also present a successive halving technique to reduce the computation of GhostUMAP. 
Our results suggest that GhostUMAP can reveal unstable data points with a reasonable computational overhead.","accessible_pdf":false,"authors":[{"affiliations":["Sungkyunkwan University, Suwon, Korea, Republic of"],"email":"mw.jung@skku.edu","is_corresponding":true,"name":"Myeongwon Jung"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"takanori.fujiwara@liu.se","is_corresponding":false,"name":"Takanori Fujiwara"},{"affiliations":["Sungkyunkwan University, Suwon, Korea, Republic of"],"email":"jmjo@skku.edu","is_corresponding":false,"name":"Jaemin Jo"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Myeongwon Jung"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1065","time_end":"","time_stamp":"","time_start":"","title":"GhostUMAP: Measuring Pointwise Instability in Dimensionality Reduction","uid":"v-short-1065","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1068":{"abstract":"Integrating textual content, such as titles, annotations, and captions, with visualizations facilitates comprehension and takeaways during data exploration. Yet current tools often lack mechanisms for integrating meaningful text with visual data. This paper introduces DASH, a bimodal data exploration tool that supports integrating semantic levels into the interactive process of visualization and text-based analysis. DASH operationalizes a modified version of Lundgard et al.'s semantic hierarchy model that categorizes data descriptions into four levels ranging from basic encodings to high-level insights. By leveraging this structured semantic level framework and a large language model's text generation capabilities, DASH enables the creation of data-driven narratives via drag-and-drop user interaction. Through a preliminary user evaluation, we discuss the utility of DASH's text and chart integration capabilities when participants perform data exploration with the tool. Based on the study's feedback and observations, we discuss implications for designing unified text and chart authoring tools.","accessible_pdf":false,"authors":[{"affiliations":["Tableau Research, Seattle, United States"],"email":"bromley.denny@gmail.com","is_corresponding":true,"name":"Dennis Bromley"},{"affiliations":["Tableau Research, Palo Alto, United States"],"email":"vsetlur@tableau.com","is_corresponding":false,"name":"Vidya Setlur"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Dennis Bromley"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1068","time_end":"","time_stamp":"","time_start":"","title":"DASH: A Bimodal Data Exploration Tool for Interactive Text and Visualizations","uid":"v-short-1068","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1072":{"abstract":"Recent advancements in vision models have significantly enhanced their ability to perform complex chart understanding tasks, such as chart captioning and chart question answering. 
However, assessing how these models process charts remains challenging. Existing benchmarks only coarsely evaluate how well the model performs the given task without thoroughly evaluating the underlying mechanisms that drive performance, such as how models extract image embeddings. This gap limits our understanding of the model's perceptual capabilities regarding fundamental graphical components. Therefore, we introduce a novel evaluation framework designed to assess the graphical perception of image embedding models. In the context of chart comprehension, we examine two main aspects of channel effectiveness: accuracy and discriminability of various visual channels. We first assess channel accuracy through the linearity of embeddings, which is the degree to which the perceived magnitude is proportional to the size of the stimulus. Conversely, distances between embeddings serve as a measure of discriminability; embeddings that are far apart can be considered discriminable. Our experiments on a general image embedding model, CLIP, showed that it perceives channel accuracy differently from humans and demonstrated distinct discriminability in specific channels such as length, tilt, and curvature. We aim to extend our work as a more general benchmark for reliable visual encoders and enhance a model for two distinctive goals for future applications: precise chart comprehension and mimicking human perception.","accessible_pdf":false,"authors":[{"affiliations":["Seoul National University, Seoul, Korea, Republic of"],"email":"dtngus0111@gmail.com","is_corresponding":true,"name":"Soohyun Lee"},{"affiliations":["Seoul National University, Seoul, Korea, Republic of"],"email":"jangsus1@snu.ac.kr","is_corresponding":false,"name":"Minsuk Chang"},{"affiliations":["Seoul National University, Seoul, Korea, Republic of"],"email":"shpark@hcil.snu.ac.kr","is_corresponding":false,"name":"Seokhyeon Park"},{"affiliations":["Seoul National University, Seoul, Korea, Republic of"],"email":"jseo@snu.ac.kr","is_corresponding":false,"name":"Jinwook Seo"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Soohyun Lee"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1072","time_end":"","time_stamp":"","time_start":"","title":"Assessing Graphical Perception of Image Embedding Models using Channel Effectiveness","uid":"v-short-1072","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1078":{"abstract":"Data visualizations are reaching global audiences. As people who use Right-to-left (RTL) scripts constitute over a billion potential data visualization users, a need emerges to investigate how visualizations are communicated to them. Web design guidelines exist to assist designers in adapting different reading directions, yet we lack a similar standard for visualization design. This paper investigates the design patterns of visualizations with RTL scripts. We collected 128 visualizations from data-driven articles published in Arabic news outlets and analyzed their chart composition, textual elements, and sources. Our analysis suggests that designers tend to apply RTL approaches more frequently for categorical data.
In other situations, we observed a mix of Left-to-right (LTR) and RTL approaches for chart directions and structures, sometimes inconsistently utilized within the same article. We reflect on this lack of clear guidelines for RTL data visualizations and derive implications for visualization authoring tools and future research directions.","accessible_pdf":false,"authors":[{"affiliations":["University College London, London, United Kingdom","UAE University , Al Ain, United Arab Emirates"],"email":"muna.alebri.19@ucl.ac.uk","is_corresponding":true,"name":"Muna Alebri"},{"affiliations":["Worcester Polytechnic Institute, Worcester, United States"],"email":"ntrakotondravony@wpi.edu","is_corresponding":false,"name":"No\u00eblle Rakotondravony"},{"affiliations":["Worcester Polytechnic Institute, Worcester, United States"],"email":"ltharrison@wpi.edu","is_corresponding":false,"name":"Lane Harrison"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Muna Alebri"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1078","time_end":"","time_stamp":"","time_start":"","title":"Design Patterns in Right-to-Left Visualizations: The Case of Arabic Content","uid":"v-short-1078","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1079":{"abstract":"Image datasets serve as the foundation for machine learning models in computer vision, significantly influencing model capabilities, performance, and biases alongside architectural considerations. Therefore, understanding the composition and distribution of these datasets has become increasingly crucial. To address the need for intuitive exploration of these datasets, we propose AEye, an extensible and scalable visualization tool tailored to image datasets. AEye utilizes a contrastively trained model to embed images into semantically meaningful high-dimensional representations, facilitating data clustering and organization. To visualize the high-dimensional representations, we project them onto a two-dimensional plane and arrange images in layers so users can seamlessly navigate and explore them interactively. Furthermore, AEye facilitates semantic search functionalities for both text and image queries, enabling users to search for content. 
We open-source the codebase for AEye, and provide a simple configuration to add additional datasets.","accessible_pdf":false,"authors":[{"affiliations":["ETH Zurich, Zurich, Switzerland"],"email":"fgroetschla@ethz.ch","is_corresponding":false,"name":"Florian Gr\u00f6tschla"},{"affiliations":["ETH Zurich, Zurich, Switzerland"],"email":"lanzendoerfer@ethz.ch","is_corresponding":false,"name":"Luca A Lanzend\u00f6rfer"},{"affiliations":["ETH Zurich, Zurich, Switzerland"],"email":"mcalzavara@student.ethz.ch","is_corresponding":false,"name":"Marco Calzavara"},{"affiliations":["ETH Zurich, Zurich, Switzerland"],"email":"wattenhofer@ethz.ch","is_corresponding":false,"name":"Roger Wattenhofer"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Florian Gr\u00f6tschla"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1079","time_end":"","time_stamp":"","time_start":"","title":"AEye: A Visualization Tool for Image Datasets","uid":"v-short-1079","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1081":{"abstract":"The sine illusion occurs when more quickly changing pairs of lines lead to larger underestimates of the delta between them. In a user study, we evaluate three visual manipulations for mitigating sine illusions: dotted lines, aligned gridlines, and offset gridlines. We asked participants to compare the deltas between two lines at two time points and found aligned gridlines to be the most effective in mitigating sine illusions. Using data from the user study, we produced a model that predicts the impact of the sine illusion in line charts by accounting for the ratio of the vertical distance between the two points of comparison. When the ratio is less than 50\\%, participants begin to be influenced by the sine illusion. This effect can be significantly exacerbated when the difference between the two deltas falls under 30\\%. We compared two explanations for the sine illusion based on our data: either participants were mistakenly using the perpendicular distance between the two lines to make their comparison (the perpendicular explanation), or they incorrectly relied on the length of the line segment perpendicular to the angle bisector of the bottom and top lines (the equal triangle explanation).
We found the equal triangle explanation to be the model that better explains participant behaviors.","accessible_pdf":false,"authors":[{"affiliations":["Google LLC, San Francisco, United States"],"email":"cknit1999@gmail.com","is_corresponding":false,"name":"Clayton J Knittel"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"jawuah3@gatech.edu","is_corresponding":false,"name":"Jane Awuah"},{"affiliations":["Northwestern University, Evanston, United States"],"email":"franconeri@northwestern.edu","is_corresponding":false,"name":"Steven L Franconeri"},{"affiliations":["Georgia Tech, Atlanta, United States"],"email":"cxiong@gatech.edu","is_corresponding":true,"name":"Cindy Xiong Bearfield"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Cindy Xiong Bearfield"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1081","time_end":"","time_stamp":"","time_start":"","title":"Gridlines Mitigate Sine Illusion in Line Charts","uid":"v-short-1081","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1089":{"abstract":"In healthcare, AI techniques are widely used for tasks like risk assessment and anomaly detection. Despite AI's potential as a valuable assistant, its role in complex medical data analysis often oversimplifies human-AI collaboration dynamics. To address this, we collaborated with a local hospital, engaging six physicians and one data scientist in a formative study. From this collaboration, we propose a framework integrating two-phase interactive visualization systems: one for Human-Led, AI-Assisted Retrospective Analysis and another for AI-Mediated, Human-Reviewed Iterative Modeling.
This framework aims to enhance understanding and discussion around effective human-AI collaboration in healthcare.","accessible_pdf":false,"authors":[{"affiliations":["ShanghaiTech University, Shanghai, China"],"email":"ouyy@shanghaitech.edu.cn","is_corresponding":true,"name":"Yang Ouyang"},{"affiliations":["University of Illinois at Urbana-Champaign, Champaign, United States"],"email":"zhang414@illinois.edu","is_corresponding":false,"name":"Chenyang Zhang"},{"affiliations":["ShanghaiTech University, Shanghai, China"],"email":"wanghe1@shanghaitech.edu.cn","is_corresponding":false,"name":"He Wang"},{"affiliations":["Zhongshan Hospital Fudan University, Shanghai, China"],"email":"15301050137@fudan.edu.cn","is_corresponding":false,"name":"Tianle Ma"},{"affiliations":["Zhongshan Hospital Fudan University, Shanghai, China"],"email":"cjiang_fdu@yeah.net","is_corresponding":false,"name":"Chang Jiang"},{"affiliations":["Zhongshan Hospital Fudan University, Shanghai, China"],"email":"522649732@qq.com","is_corresponding":false,"name":"Yuheng Yan"},{"affiliations":["Zhongshan Hospital Fudan University, Shanghai, China"],"email":"yan.zuoqin@zs-hospital.sh.cn","is_corresponding":false,"name":"Zuoqin Yan"},{"affiliations":["Hong Kong University of Science and Technology, Hong Kong, Hong Kong"],"email":"mxj@cse.ust.hk","is_corresponding":false,"name":"Xiaojuan Ma"},{"affiliations":["Southeast University, Nanjing, China"],"email":"cshiag@connect.ust.hk","is_corresponding":false,"name":"Chuhan Shi"},{"affiliations":["ShanghaiTech University, Shanghai, China"],"email":"liquan@shanghaitech.edu.cn","is_corresponding":false,"name":"Quan Li"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yang Ouyang"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1089","time_end":"","time_stamp":"","time_start":"","title":"A Two-Phase Visualization System for Continuous Human-AI Collaboration in Sequelae Analysis and Modeling","uid":"v-short-1089","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1090":{"abstract":"Visualizing high dimensional data is challenging, since any dimensionality reduction technique will distort distances. A classic method in cartography\u2013Tissot\u2019s Indicatrix, specific to sphere-to-plane maps\u2013visualizes distortion using ellipses. Inspired by this idea, we describe the hypertrix: a method for representing distortions that occur when data is projected from arbitrarily high dimensions onto a 2D plane.
We demonstrate our technique through synthetic and real-world datasets, and describe how this indicatrix can guide interpretations of nonlinear dimensionality reduction.","accessible_pdf":false,"authors":[{"affiliations":["Harvard University, Boston, United States"],"email":"sraval@g.harvard.edu","is_corresponding":true,"name":"Shivam Raval"},{"affiliations":["Harvard University, Cambridge, United States","Google Research, Cambridge, United States"],"email":"viegas@google.com","is_corresponding":false,"name":"Fernanda Viegas"},{"affiliations":["Harvard University, Cambridge, United States","Google Research, Cambridge, United States"],"email":"wattenberg@gmail.com","is_corresponding":false,"name":"Martin Wattenberg"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Shivam Raval"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1090","time_end":"","time_stamp":"","time_start":"","title":"Hypertrix: An indicatrix for high-dimensional visualizations","uid":"v-short-1090","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1096":{"abstract":"Coordinated multiple views (CMV) in a visual analytics system can help users explore multiple data representations simultaneously with linked interactions. However, the implementation of coordinated multiple views can be challenging. Without standard software libraries, visualization designers need to re-implement CMV during the development of each system. We introduce use-coordination, a grammar and software library that supports the efficient implementation of CMV. The grammar defines a JSON-based representation for an abstract coordination model from the information visualization literature. We contribute an optional extension to the model and grammar that allows for hierarchical coordination. Through three use cases, we show that use-coordination enables implementation of CMV in systems containing not only basic statistical charts but also more complex visualizations such as medical imaging volumes.
We describe six software extensions, including a graphical editor for manipulation of coordination, which showcase the potential to build upon our coordination-focused declarative approach.","accessible_pdf":false,"authors":[{"affiliations":["Harvard Medical School, Boston, United States"],"email":"mark_keller@hms.harvard.edu","is_corresponding":true,"name":"Mark S Keller"},{"affiliations":["Harvard Medical School, Boston, United States"],"email":"trevor_manz@g.harvard.edu","is_corresponding":false,"name":"Trevor Manz"},{"affiliations":["Harvard Medical School, Boston, United States"],"email":"nils@hms.harvard.edu","is_corresponding":false,"name":"Nils Gehlenborg"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Mark S Keller"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1096","time_end":"","time_stamp":"","time_start":"","title":"Use-Coordination: Model, Grammar, and Library for Implementation of Coordinated Multiple Views","uid":"v-short-1096","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1097":{"abstract":"Visualization tools now commonly present automated insights highlighting salient data patterns, including correlations, distributions, outliers, and differences, among others. While these insights are valuable for data exploration and chart interpretation, users currently only have a binary choice of accepting or rejecting them, lacking the flexibility to refine the system logic or customize the insight generation process. To address this limitation, we present GROOT, a prototype system that allows users to proactively specify and refine automated data insights. The system allows users to directly manipulate chart elements to receive insight recommendations based on their selections. Additionally, GROOT provides users with a manual editing interface to customize, reconfigure, or add new insights to individual charts and propagate them to future explorations. 
We describe a usage scenario to illustrate how these features collectively support insight editing and configuration, and discuss opportunities for future work including incorporating LLMs, improving semantic data and visualization search, and supporting insight management.","accessible_pdf":false,"authors":[{"affiliations":["University of Maryland, College Park, College Park, United States","Tableau Research, Seattle, United States"],"email":"sgathani@cs.umd.edu","is_corresponding":true,"name":"Sneha Gathani"},{"affiliations":["Tableau Research, Seattle, United States"],"email":"amcrisan@uwaterloo.ca","is_corresponding":false,"name":"Anamaria Crisan"},{"affiliations":["Tableau Research, Palo Alto, United States"],"email":"vsetlur@tableau.com","is_corresponding":false,"name":"Vidya Setlur"},{"affiliations":["Tableau Research, Seattle, United States"],"email":"arjun.srinivasan.10@gmail.com","is_corresponding":false,"name":"Arjun Srinivasan"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Sneha Gathani"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1097","time_end":"","time_stamp":"","time_start":"","title":"Groot: An Interface for Editing and Configuring Automated Data Insights","uid":"v-short-1097","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1100":{"abstract":"Confidence scores of automatic speech recognition (ASR) outputs are often inadequately communicated, preventing their seamless integration into analytical workflows. In this paper, we introduce ConFides, a visual analytic system developed in collaboration with intelligence analysts to address this issue. ConFides aims to aid exploration and post-AI-transcription editing by visually representing the confidence associated with the transcription. We demonstrate how our tool can assist intelligence analysts who use ASR outputs in their analytical and exploratory tasks and how it can help mitigate misinterpretation of crucial information. We also discuss opportunities for improving textual data cleaning and model transparency for human-machine collaboration.","accessible_pdf":false,"authors":[{"affiliations":["Washington University in St. Louis, St. Louis, United States"],"email":"sha@wustl.edu","is_corresponding":true,"name":"Sunwoo Ha"},{"affiliations":["Washington University in St. Louis, St. Louis, United States"],"email":"chaelim@wustl.edu","is_corresponding":false,"name":"Chaehun Lim"},{"affiliations":["Smith College, Northampton, United States"],"email":"jcrouser@smith.edu","is_corresponding":false,"name":"R. Jordan Crouser"},{"affiliations":["Washington University in St.
Louis, United States"],"email":"alvitta@wustl.edu","is_corresponding":false,"name":"Alvitta Ottley"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Sunwoo Ha"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1100","time_end":"","time_stamp":"","time_start":"","title":"ConFides: A Visual Analytics Solution for Automated Speech Recognition Analysis and Exploration","uid":"v-short-1100","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1101":{"abstract":"Color coding, a technique assigning specific colors to different information types, has proven advantages in aiding human cognitive activities, especially reading and comprehension. The rise of Large Language Models (LLMs) has streamlined document coding, enabling simple automatic text labeling with various schemes. This has the potential to make color-coding more accessible and benefit more users. However, the importance of color choice, particularly in aiding textual information seeking through various color schemes, is not well studied. This paper presents a user study assessing the effectiveness of various color schemes generated by different base colors for readers' information-seeking performance in text documents color-coded by LLMs. Participants performed information-seeking tasks within scholarly papers' abstracts, each coded with a different scheme under time constraints. Results showed that non-analogous color schemes lead to better information-seeking performance, in both accuracy and response time. Yellow-inclusive color schemes lead to shorter response times and are also preferred by most participants. These could inform the better choice of color scheme for annotating text documents. As LLMs advance document coding, we advocate for more research focusing on the \"color\" aspect of color-coding techniques.","accessible_pdf":false,"authors":[{"affiliations":["Pennsylvania State University, University Park, United States"],"email":"samnghoyin@gmail.com","is_corresponding":true,"name":"Ho Yin Ng"},{"affiliations":["Pennsylvania State University, University Park, United States"],"email":"zmh5268@psu.edu","is_corresponding":false,"name":"Zeyu He"},{"affiliations":["Pennsylvania State University, University Park , United States"],"email":"txh710@psu.edu","is_corresponding":false,"name":"Ting-Hao Kenneth Huang"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Ho Yin Ng"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1101","time_end":"","time_stamp":"","time_start":"","title":"What Color Scheme is More Effective in Assisting Readers to Locate Information in a Color-Coded Article?","uid":"v-short-1101","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1109":{"abstract":"Homophily refers to the tendency of individuals to associate with others who are similar to them in characteristics, such as, race, ethnicity, age, gender, or interests. 
In this paper, we investigate if individuals exhibit racial homophily when viewing visualizations, using mass shooting data in the United States as the example topic. We conducted a crowdsourced experiment (N=450) where each participant was shown a visualization displaying the counts of mass shooting victims, highlighting the counts for one of three racial groups (White, Black, or Hispanic). Participants were assigned to view visualizations highlighting their own race or a different race to assess the influence of racial concordance on changes in affect (emotion) and attitude towards gun control. While we did not find evidence of homophily, the results showed a significant negative shift in affect across all visualization conditions. Notably, political ideology significantly impacted changes in affect, with more liberal views correlating with a more negative affect change. Our findings underscore the complexity of reactions to mass shooting visualizations and highlight the need for additional measures for understanding homophily in visualizations.","accessible_pdf":false,"authors":[{"affiliations":["New York University, Brooklyn, United States"],"email":"pt2393@nyu.edu","is_corresponding":true,"name":"Poorna Talkad Sukumar"},{"affiliations":["New York University, Brooklyn, United States"],"email":"mporfiri@nyu.edu","is_corresponding":false,"name":"Maurizio Porfiri"},{"affiliations":["New York University, New York, United States"],"email":"onov@nyu.edu","is_corresponding":false,"name":"Oded Nov"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Poorna Talkad Sukumar"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1109","time_end":"","time_stamp":"","time_start":"","title":"Connections Beyond Data: Exploring Homophily With Visualizations","uid":"v-short-1109","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1114":{"abstract":"As visualization literacy and its implications gain prominence, we need effective methods to teach and prepare students for the variety of visualizations they might encounter in an increasingly data-driven world. Recently, the potential of comics has been recognized in various data visualization contexts, including educational settings. In this paper, we describe the development of a workshop in which we use our \u201ccomic construction kit\u201d as a tool for students to understand various data visualization techniques through an interactive creative approach of creating explanatory comics. We report on our insights and learnings from holding eight workshops with high school students, high school teachers, university students, and university lecturers, aiming to enhance the landscape of hands-on visualization activities that can enrich the visualization classroom. The comic construction kit and all supplemental materials are open source under a CC-BY license and available at https://fhstp.github.io/comixplain/vis4schools.html.","accessible_pdf":false,"authors":[{"affiliations":["St. P\u00f6lten University of Applied Sciences, St. P\u00f6lten, Austria"],"email":"magdalena.boucher@fhstp.ac.at","is_corresponding":true,"name":"Magdalena Boucher"},{"affiliations":["St. Poelten University of Applied Sciences, St. 
Poelten, Austria"],"email":"christina.stoiber@fhstp.ac.at","is_corresponding":false,"name":"Christina Stoiber"},{"affiliations":["School of Informatics, Communications and Media, Hagenberg im M\u00fchlkreis, Austria"],"email":"mandy.keck@fh-hagenberg.at","is_corresponding":false,"name":"Mandy Keck"},{"affiliations":["St. Poelten University of Applied Sciences, St. Poelten, Austria"],"email":"victor.oliveira@fhstp.ac.at","is_corresponding":false,"name":"Victor Adriel de Jesus Oliveira"},{"affiliations":["St. Poelten University of Applied Sciences, St. Poelten, Austria"],"email":"wolfgang.aigner@fhstp.ac.at","is_corresponding":false,"name":"Wolfgang Aigner"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Magdalena Boucher"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1114","time_end":"","time_stamp":"","time_start":"","title":"The Comic Construction Kit: An Activity for Students to Learn and Explain Data Visualizations","uid":"v-short-1114","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1116":{"abstract":"Visualizations support rapid analysis of scientific datasets, allowing viewers to glean aggregate information (e.g., the mean) within split-seconds. While prior research has explored this ability in conventional charts, it is unclear if spatial visualizations used by computational scientists afford a similar ensemble perception capacity. We investigate people's ability to estimate two summary statistics, mean and variance, from pseudocolor scalar fields. In a crowdsourced experiment, we find that participants can reliably characterize both statistics, although variance discrimination requires a much stronger signal. Multi-hue and diverging colormaps outperformed monochromatic, luminance ramps in aiding this extraction. Analysis of qualitative responses suggests that participants often estimate the distribution of hotspots and valleys as visual proxies for data statistics. These findings suggest that people's summary interpretation of spatial datasets is likely driven by the appearance of discrete color segments, rather than assessments of overall luminance. Implicit color segmentation in quantitative displays could thus prove more useful than previously assumed by facilitating quick, gist-level judgments about color-coded visualizations.","accessible_pdf":false,"authors":[{"affiliations":["Argonne National Laboratory, Lemont, United States"],"email":"vmateevitsi@anl.gov","is_corresponding":false,"name":"Victor A. Mateevitsi"},{"affiliations":["Argonne National Laboratory, Lemont, United States","University of Illinois Chicago, Chicago, United States"],"email":"papka@anl.gov","is_corresponding":false,"name":"Michael E. 
Papka"},{"affiliations":["Indiana University, Indianapolis, United States"],"email":"redak@iu.edu","is_corresponding":true,"name":"Khairi Reda"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Khairi Reda"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1116","time_end":"","time_stamp":"","time_start":"","title":"Science in a Blink: Supporting Ensemble Perception in Scalar Fields","uid":"v-short-1116","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1117":{"abstract":"Geovisualizations are powerful tools for exploratory spatial analysis, enabling sighted users to discern patterns, trends, and relationships within geographic data. However, these visual tools have remained largely inaccessible to screen-reader users. We present AltGeoViz, a new system we designed to facilitate geovisualization exploration for these users. AltGeoViz dynamically generates alt-text descriptions based on the user's current map view, providing summaries of spatial patterns and descriptive statistics. In a study of five screen-reader users, we found that AltGeoViz enabled them to interact with geovisualizations in previously infeasible ways. Participants demonstrated a clear understanding of data summaries and their location context, and they could synthesize spatial understandings of their explorations. Moreover, we identified key areas for improvement, such as the addition of intuitive spatial navigation controls and comparative analysis features.","accessible_pdf":false,"authors":[{"affiliations":["University of Washington, Seattle, United States"],"email":"chuchuli@cs.washington.edu","is_corresponding":true,"name":"Chu Li"},{"affiliations":["University of Washington, Seattle, United States"],"email":"ypang2@cs.washington.edu","is_corresponding":false,"name":"Rock Yuren Pang"},{"affiliations":["University of Washington, Seattle, United States"],"email":"asharif@cs.washington.edu","is_corresponding":false,"name":"Ather Sharif"},{"affiliations":["University of Washington, Seattle, United States"],"email":"chheda@cs.washington.edu","is_corresponding":false,"name":"Arnavi Chheda-Kothary"},{"affiliations":["University of Washington, Seattle, United States"],"email":"jheer@uw.edu","is_corresponding":false,"name":"Jeffrey Heer"},{"affiliations":["University of Washington, Seattle, United States"],"email":"jonf@cs.uw.edu","is_corresponding":false,"name":"Jon E. Froehlich"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Chu Li"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1117","time_end":"","time_stamp":"","time_start":"","title":"AltGeoViz: Facilitating Accessible Geovisualization","uid":"v-short-1117","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1119":{"abstract":"Analyzing uncertainty in spatial data is a vital task in many domains, as for example with climate and weather simulation ensembles. 
Although there are many methods to support the analysis of the uncertainty, such as uncertain isocontours or the calculation of statistical values, it is still a challenge to get an overview of the uncertainty, decide on a further method or parameter to analyze the data, or investigate a region or point of interest further. We present cumulative height fields, a visualization method for 2D scalar field ensembles using the marginal empirical distribution function, and show preliminary results using volume rendering and slicing for the Max Planck Institute Grand Ensemble.","accessible_pdf":false,"authors":[{"affiliations":["Institute of Computer Science, Leipzig University, Leipzig, Germany"],"email":"daetz@informatik.uni-leipzig.de","is_corresponding":true,"name":"Tomas Rodolfo Daetz Chacon"},{"affiliations":["German Climate Computing Center (DKRZ), Hamburg, Germany"],"email":"boettinger@dkrz.de","is_corresponding":false,"name":"Michael B\u00f6ttinger"},{"affiliations":["Leipzig University, Leipzig, Germany"],"email":"scheuermann@informatik.uni-leipzig.de","is_corresponding":false,"name":"Gerik Scheuermann"},{"affiliations":["Leipzig University, Leipzig, Germany"],"email":"heine@informatik.uni-leipzig.de","is_corresponding":false,"name":"Christian Heine"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Tomas Rodolfo Daetz Chacon"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1119","time_end":"","time_stamp":"","time_start":"","title":"Visualization of 2D Scalar Field Ensembles Using Volume Visualization of the Empirical Distribution Function","uid":"v-short-1119","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1121":{"abstract":"Many real-world networks contain structurally-equivalent nodes. These are defined as vertices that share the same set of neighboring nodes, making them interchangeable with a traditional graph layout approach. However, many real-world graphs also have properties associated with nodes, adding additional meaning to them. We present an approach for swapping locations of structurally-equivalent nodes in graph layout so that those with more similar properties have closer proximity to each other. This improves the usefulness of the visualization from an attribute perspective without negatively impacting the visualization from a structural perspective.
We include an algorithm for finding these sets of nodes in linear time, as well as methodologies for ordering nodes based on their attribute similarity, which works for scalar, ordinal, multidimensional, and categorical data.","accessible_pdf":false,"authors":[{"affiliations":["Pacific Northwest National Lab, Richland, United States"],"email":"patrick.mackey@pnnl.gov","is_corresponding":true,"name":"Patrick Mackey"},{"affiliations":["University of Arizona, Tucson, United States","Pacific Northwest National Laboratory, Richland, United States"],"email":"jacobmiller1@arizona.edu","is_corresponding":false,"name":"Jacob Miller"},{"affiliations":["Pacific Northwest National Laboratory, Richland, United States"],"email":"liz.f@pnnl.gov","is_corresponding":false,"name":"Liz Faultersack"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Patrick Mackey"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1121","time_end":"","time_stamp":"","time_start":"","title":"Improving Property Graph Layouts by Leveraging Attribute Similarity for Structurally Equivalent Nodes","uid":"v-short-1121","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1126":{"abstract":"Psychological research often involves understanding psychological constructs through conducting factor analysis on data collected by a questionnaire, which can comprise hundreds of questions. Without interactive systems for interpreting factor models, researchers are frequently exposed to subjectivity, potentially leading them to misinterpret or overlook crucial information. This paper introduces FAVis, a novel interactive visualization tool designed to aid researchers in interpreting and evaluating factor analysis results. FAVis enhances the understanding of relationships between variables and factors by supporting multiple views for visualizing factor loadings and correlations, allowing users to analyze information from various perspectives. The primary feature of FAVis is to enable users to set optimal thresholds for factor loadings to balance clarity and information retention. FAVis also allows users to assign tags to variables, enhancing the understanding of factors by linking them to their associated psychological constructs. We conduct a case study on a dataset from the Motivational State Questionnaire, utilizing a three-factor common factor model.
Our user study demonstrates the utility of FAVis in various tasks.","accessible_pdf":false,"authors":[{"affiliations":["University of Notre Dame, Notre Dame, United States","University of Notre Dame, Notre Dame, United States"],"email":"ylu22@nd.edu","is_corresponding":true,"name":"Yikai Lu"},{"affiliations":["University of Notre Dame, Notre Dame, United States"],"email":"chaoli.wang@nd.edu","is_corresponding":false,"name":"Chaoli Wang"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yikai Lu"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1126","time_end":"","time_stamp":"","time_start":"","title":"FAVis: Visual Analytics of Factor Analysis for Psychological Research","uid":"v-short-1126","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1127":{"abstract":"In this paper, we analyze the Apple Vision Pro hardware and the visionOS software platform, assessing their capabilities for volume rendering of structured grids, a prevalent technique across various applications. The Apple Vision Pro supports multiple display modes, from classical augmented reality (AR) using video see-through technology to immersive virtual reality (VR) environments that exclusively render virtual objects. These modes utilize different APIs and exhibit distinct capabilities. Our focus is on direct volume rendering, selected for its implementation challenges due to the native graphics APIs being predominantly oriented towards surface shading. Volume rendering is particularly vital in fields where AR and VR visualizations offer substantial benefits, such as in medicine and manufacturing. Despite its initial high cost, we anticipate that the Vision Pro will become more accessible and affordable over time, following Apple's track record of market expansion. 
As these devices become more prevalent, understanding how to effectively program and utilize them becomes increasingly important, offering significant opportunities for innovation and practical applications in various sectors.","accessible_pdf":false,"authors":[{"affiliations":["University of Duisburg-Essen, Duisburg, Germany"],"email":"camilla.hrycak@uni-due.de","is_corresponding":true,"name":"Camilla Hrycak"},{"affiliations":["University of Duisburg-Essen, Duisburg, Germany"],"email":"david.lewakis@stud.uni-due.de","is_corresponding":false,"name":"David Lewakis"},{"affiliations":["University of Duisburg-Essen, Duisburg, Germany"],"email":"jens.krueger@uni-due.de","is_corresponding":false,"name":"Jens Harald Krueger"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Camilla Hrycak"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1127","time_end":"","time_stamp":"","time_start":"","title":"Investigating the Apple Vision Pro Spatial Computing Platform for GPU-Based Volume Visualization","uid":"v-short-1127","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1130":{"abstract":"Visualization, from simple line plots to complex high-dimensional visual analysis systems, has established itself throughout numerous domains to explore, analyze, and evaluate data. Applying such visualizations in the context of simulation science, where High-Performance Computing (HPC) produces ever-growing amounts of data that is more complex, potentially multidimensional, and multi-modal, requires resources and a high level of technological experience often not available to domain experts. In this work, we present DaVE - a curated database of visualization examples, which aims to provide state-of-the-art and advanced visualization methods that arise in the context of HPC applications. Based on domain- or data-specific descriptors entered by the user, DaVE provides a list of appropriate visualization techniques, each accompanied by descriptions, examples, references, and resources. Sample code, adaptable container templates, and recipes for easy integration in HPC applications can be downloaded for easy access to high-fidelity visualizations.
While the database is currently filled with a limited number of entries based on a broad evaluation of needs and challenges of current HPC users, DaVE is designed to be easily extended by experts from both the visualization and HPC communities.","accessible_pdf":false,"authors":[{"affiliations":["RWTH Aachen University, Aachen, Germany"],"email":"koenen@informatik.rwth-aachen.de","is_corresponding":true,"name":"Jens Koenen"},{"affiliations":["RPTU Kaiserslautern-Landau, Kaiserslautern, Germany"],"email":"m.petersen@rptu.de","is_corresponding":false,"name":"Marvin Petersen"},{"affiliations":["RPTU Kaiserslautern-Landau, Kaiserslautern, Germany"],"email":"garth@rptu.de","is_corresponding":false,"name":"Christoph Garth"},{"affiliations":["RWTH Aachen University, Aachen, Germany"],"email":"gerrits@vis.rwth-aachen.de","is_corresponding":false,"name":"Tim Gerrits"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Jens Koenen"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1130","time_end":"","time_stamp":"","time_start":"","title":"DaVE - A Curated Database of Visualization Examples","uid":"v-short-1130","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1135":{"abstract":"Humans struggle to perceive and interpret high-dimensional data. Therefore, high-dimensional data are often projected into two dimensions for visualization. Many applications benefit from complex nonlinear dimensionality reduction techniques, but the effects of individual high-dimensional features are hard to explain in the two-dimensional space. Most visualization solutions use multiple two-dimensional plots, each showing the effect of one high-dimensional feature in two dimensions; this approach creates a need for a visual inspection of k plots for a k-dimensional input space. Our solution, Feature Clock, provides a novel approach that eliminates the need to inspect these k plots to grasp the influence of original features on the data structure depicted in two dimensions. 
Feature Clock enhances the explainability and compactness of visualizations of embedded data and is available in an open-source Python library.","accessible_pdf":false,"authors":[{"affiliations":["ETH Z\u00fcrich, Z\u00fcrich, Switzerland"],"email":"ovcharenko.folga@gmail.com","is_corresponding":true,"name":"Olga Ovcharenko"},{"affiliations":["ETH Z\u00fcrich, Z\u00fcrich, Switzerland"],"email":"rita.sevastjanova@uni-konstanz.de","is_corresponding":false,"name":"Rita Sevastjanova"},{"affiliations":["ETH Zurich, Z\u00fcrich, Switzerland"],"email":"valentina.boeva@inf.ethz.ch","is_corresponding":false,"name":"Valentina Boeva"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Olga Ovcharenko"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1135","time_end":"","time_stamp":"","time_start":"","title":"Feature Clock: High-Dimensional Effects in Two-Dimensional Plots","uid":"v-short-1135","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1144":{"abstract":"Reconstruction of 3D scenes from 2D images is a technical challenge that impacts domains from Earth and planetary sciences and space exploration to augmented and virtual reality. Typically, reconstruction algorithms first identify common features across images and then minimize reconstruction errors after estimating the shape of the terrain. This bundle adjustment (BA) step optimizes around a single, simplifying scalar value that obfuscates many possible causes of reconstruction errors (e.g., initial estimate of the position and orientation of the camera, lighting conditions, ease of feature detection in the terrain). Reconstruction errors can lead to inaccurate scientific inferences or endanger a spacecraft exploring a remote environment. To address this challenge, we present VECTOR, a visual analysis tool that improves error inspection for stereo reconstruction BA. VECTOR provides analysts with previously unavailable visibility into feature locations, camera pose, and computed 3D points. VECTOR was developed in partnership with the Perseverance Mars Rover and Ingenuity Mars Helicopter terrain reconstruction team at the NASA Jet Propulsion Laboratory. 
We report on how this tool was used to debug and improve terrain reconstruction for the Mars 2020 mission.","accessible_pdf":false,"authors":[{"affiliations":["Northeastern University, Boston, United States"],"email":"racquel.fygenson@gmail.com","is_corresponding":false,"name":"Racquel Fygenson"},{"affiliations":["Weta FX, Auckland, New Zealand"],"email":"kjawad@andrew.cmu.edu","is_corresponding":false,"name":"Kazi Jawad"},{"affiliations":["Art Center, Pasadena, United States"],"email":"zongzhanisabelli@gmail.com","is_corresponding":false,"name":"Zongzhan Li"},{"affiliations":["California Institute of Technology, Pasadena, United States"],"email":"francois.ayoub@jpl.nasa.gov","is_corresponding":false,"name":"Francois Ayoub"},{"affiliations":["California Institute of Technology, Pasadena, United States"],"email":"bob.deen@jpl.nasa.gov","is_corresponding":false,"name":"Robert G Deen"},{"affiliations":["California Institute of Technology, Pasadena, United States"],"email":"sd@scottdavidoff.com","is_corresponding":false,"name":"Scott Davidoff"},{"affiliations":["Carnegie Mellon University, Pittsburgh, United States"],"email":"domoritz@cmu.edu","is_corresponding":false,"name":"Dominik Moritz"},{"affiliations":["NASA-JPL, Pasadena, United States"],"email":"mauricio.a.hess.flores@jpl.nasa.gov","is_corresponding":true,"name":"Mauricio Hess-Flores"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Mauricio Hess-Flores"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1144","time_end":"","time_stamp":"","time_start":"","title":"Opening the black box of 3D reconstruction error analysis with VECTOR","uid":"v-short-1144","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1146":{"abstract":"Millions of runners rely on smart watches that display running-related metrics such as pace, heart rate and distance for training and racing -- mostly with text and numbers. Although research tells us that visualizations are a good alternative to text on smart watches, we know little about how visualizations can help in realistic running scenarios. We conducted a study in which 20 runners completed running-related tasks on an outdoor track using both text and visualizations. 
Our results show that runners are 1.5 to 8 times faster in completing those tasks with visualizations than with text, prefer visualizations to text, and would use such visualizations while running -- were they available on their smart watch.","accessible_pdf":false,"authors":[{"affiliations":["University of Victoria, Victoria, Canada"],"email":"sarinaksj@uvic.ca","is_corresponding":false,"name":"Sarina Kashanj"},{"affiliations":["University of Victoria, Victoira, Canada","Delft University of Technology, Delft, Netherlands"],"email":"xiyao.wang23@gmail.com","is_corresponding":false,"name":"Xiyao Wang"},{"affiliations":["University of Victoria, Victoria, Canada"],"email":"cperin@uvic.ca","is_corresponding":true,"name":"Charles Perin"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Charles Perin"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1146","time_end":"","time_stamp":"","time_start":"","title":"Visualizations on Smart Watches while Running: It Actually Helps!","uid":"v-short-1146","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1150":{"abstract":"Exploratory visual data analysis tools empower data analysts to efficiently and intuitively explore data insights throughout the entire analysis cycle. However, the gap between common programmatic analysis (e.g., within computational notebooks) and exploratory visual analysis leads to a disjointed and inefficient data analysis experience. To bridge this gap, we developed PyGWalker, a Python library that offers on-the-fly assistance for exploratory visual data analysis. It features a lightweight and intuitive GUI with a shelf builder modality. Its loosely coupled architecture supports multiple computational environments to accommodate varying data sizes. Since its release in February 2023, PyGWalker has gained much attention, with 468k downloads on PyPI and over 9.8k stars on GitHub as of April 2024. 
This demonstrates its value to the data science and visualization community, with researchers and developers integrating it into their own applications and studies.","accessible_pdf":false,"authors":[{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China","Kanaries Data Inc., Hangzhou, China"],"email":"yue.yu@connect.ust.hk","is_corresponding":true,"name":"Yue Yu"},{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"lshenaj@connect.ust.hk","is_corresponding":false,"name":"Leixian Shen"},{"affiliations":["Kanaries Data Inc., Hangzhou, China"],"email":"feilong@kanaries.net","is_corresponding":false,"name":"Fei Long"},{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"huamin@cse.ust.hk","is_corresponding":false,"name":"Huamin Qu"},{"affiliations":["Kanaries Data Inc., Hangzhou, China"],"email":"haochen@kanaries.net","is_corresponding":false,"name":"Hao Chen"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yue Yu"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1150","time_end":"","time_stamp":"","time_start":"","title":"PyGWalker: On-the-fly Assistant for Exploratory Visual Data Analysis","uid":"v-short-1150","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1155":{"abstract":"Augmented reality (AR) area labels can highlight real-life objects, visualize real world regions with arbitrary boundaries, and show invisible objects or features. Environment conditions such as lighting and clutter can decrease fixed or passive label visibility, and labels that have high opacity levels can occlude crucial details in the environment. We design and evaluate active AR area label visualization modes to enhance visibility across real-life environments, while still retaining environment details within the label. For this, we define a characteristic color that is perceptually distant from the environment in CIELAB space, then introduce spatial variations among label pixel colors based on the underlying environment variation.
In a user study with 18 participants, we discovered that our active label visualization modes can be comparable in visibility to a fixed green baseline by Gabbard et al., and can outperform it with added spatial variation in cluttered environments, across varying levels of lighting (e.g., nighttime), and in environments with colors similar to the fixed baseline color.","accessible_pdf":false,"authors":[{"affiliations":["Brown University, Providence, United States"],"email":"hojung_kwon@brown.edu","is_corresponding":false,"name":"Hojung Kwon"},{"affiliations":["Brown University, Providence, United States"],"email":"yuanbo_li@brown.edu","is_corresponding":false,"name":"Yuanbo Li"},{"affiliations":["Brown University, Providence, United States"],"email":"chloe_ye2019@hotmail.com","is_corresponding":false,"name":"Xiaohan Ye"},{"affiliations":["Brown University, Providence, United States"],"email":"praccho_muna-mcquay@brown.edu","is_corresponding":false,"name":"Praccho Muna-McQuay"},{"affiliations":["Duke University, Durham, United States"],"email":"liuren.yin@duke.edu","is_corresponding":false,"name":"Liuren Yin"},{"affiliations":["Brown University, Providence, United States"],"email":"james_tompkin@brown.edu","is_corresponding":true,"name":"James Tompkin"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["James Tompkin"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1155","time_end":"","time_stamp":"","time_start":"","title":"Active Appearance and Spatial Variation Can Improve Visibility in Area Labels for Augmented Reality","uid":"v-short-1155","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1156":{"abstract":"Compound graphs are networks in which vertices can be grouped into larger subsets, with these subsets capable of further grouping, resulting in a nesting that can be many levels deep. Such graphs arise in several applications including biological workflows, chemical equations, and computational data flow analysis. Common layouts prioritize the lowest level of the grouping, down to the individual ungrouped vertices, which can make the higher level grouped structures more difficult to discern, especially in deeply nested networks. We contribute an overview+detail layout that preserves the saliency of the higher level network structure when groups are expanded to show internal nested structure. Our layout draws inner structures adjacent to their parents, using a modified tree layout to place substructures. We describe our algorithm and then present case studies demonstrating the layout's utility to a domain expert working on data flow analysis. 
Finally, we discuss network parameters and analysis situations in which our layout is well suited.","accessible_pdf":false,"authors":[{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"hatch.on27@gmail.com","is_corresponding":true,"name":"Chang Han"},{"affiliations":["University of Arizona, Tucson, United States"],"email":"lieffers@arizona.edu","is_corresponding":false,"name":"Justin Lieffers"},{"affiliations":["University of Arizona, Tucson, United States"],"email":"claytonm@arizona.edu","is_corresponding":false,"name":"Clayton Morrison"},{"affiliations":["The University of Utah, Salt Lake City, United States"],"email":"kisaacs@sci.utah.edu","is_corresponding":false,"name":"Katherine E. Isaacs"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Chang Han"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1156","time_end":"","time_stamp":"","time_start":"","title":"An Overview + Detail Layout for Visualizing Compound Graphs","uid":"v-short-1156","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1159":{"abstract":"With two studies, we assess how different walking trajectories (straight line, circular, and infinity) and speeds (2 km/h, 4 km/h, and 6 km/h) influence the accuracy and response time of participants reading micro visualizations on a smartwatch. We showed our participants common watch face micro visualizations including date, time, weather information, and four complications showing progress charts of fitness data. Our findings suggest that while walking trajectories did not significantly affect reading performance, overall walking activity, especially at high speeds, hurt reading accuracy and, to some extent, response time.","accessible_pdf":false,"authors":[{"affiliations":["University of Stuttgart, Stuttgart, Germany"],"email":"fairouz.grioui@vis.uni-stuttgart.de","is_corresponding":true,"name":"Fairouz Grioui"},{"affiliations":["University of Stuttgart, Stuttgart, Germany"],"email":"research@blascheck.eu","is_corresponding":false,"name":"Tanja Blascheck"},{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Orsay, France","Inria, Saclay, France"],"email":"yaolijie0219@gmail.com","is_corresponding":false,"name":"Lijie Yao"},{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Orsay, France","Inria, Saclay, France"],"email":"petra.isenberg@inria.fr","is_corresponding":false,"name":"Petra Isenberg"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Fairouz Grioui"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1159","time_end":"","time_stamp":"","time_start":"","title":"Micro Visualizations on a Smartwatch: Assessing Reading Performance While Walking","uid":"v-short-1159","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1161":{"abstract":"Digital twins are an excellent tool to model, visualize, and simulate complex systems, to understand and optimize their operation. 
In this work, we present the technical challenges of real-time visualization of a digital twin of the Frontier supercomputer. We show the initial prototype and current state of the twin and highlight technical design challenges of visualizing such a large High Performance Computing (HPC) system. The goal is to understand the use of augmented reality as a primary way to extract information and collaborate on digital twins of complex systems. This leverages the spatio-temporal aspect of a 3D representation of a digital twin, with the ability to view historical and real-time telemetry, triggering simulations of a system state and viewing the results, which can be augmented via dashboards for details. Finally, we discuss considerations and opportunities for augmented reality of digital twins of large-scale, parallel computers.","accessible_pdf":false,"authors":[{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"maiterthm@ornl.gov","is_corresponding":true,"name":"Matthias Maiterth"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"brewerwh@ornl.gov","is_corresponding":false,"name":"Wes Brewer"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"dewetd@ornl.gov","is_corresponding":false,"name":"Dane De Wet"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"greenwoodms@ornl.gov","is_corresponding":false,"name":"Scott Greenwood"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"kumarv@ornl.gov","is_corresponding":false,"name":"Vineet Kumar"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"hinesjr@ornl.gov","is_corresponding":false,"name":"Jesse Hines"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"bouknightsl@ornl.gov","is_corresponding":false,"name":"Sedrick L Bouknight"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"wangz@ornl.gov","is_corresponding":false,"name":"Zhe Wang"},{"affiliations":["Hewlett Packard Enterprise, Berkshire, United Kingdom"],"email":"tim.dykes@hpe.com","is_corresponding":false,"name":"Tim Dykes"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"fwang2@ornl.gov","is_corresponding":false,"name":"Feiyi Wang"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Matthias Maiterth"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1161","time_end":"","time_stamp":"","time_start":"","title":"Visualizing an Exascale Data Center Digital Twin: Considerations, Challenges and Opportunities","uid":"v-short-1161","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1163":{"abstract":"Integral curves have been widely used to represent and analyze various vector fields. Curve-based clustering and pattern search approaches are usually applied to aid the identification of meaningful patterns from large numbers of integral curves. However, they do not support an interactive, level-of-detail exploration of these patterns. To address this, we propose a Curve Segment Neighborhood Graph (CSNG) to capture the relationships between neighboring curve segments.
This graph representation enables us to adapt the fast community detection algorithm, i.e., the Louvain algorithm, to identify individual graph communities from CSNG. Our results show that these communities often correspond to the features of the flow. To achieve a multi-level interactive exploration of the detected communities, we adapt a force-directed layout that allows users to refine and re-group communities based on their domain knowledge. We incorporate the proposed techniques into an interactive system to enable effective analysis and interpretation of complex patterns in large-scale integral curve datasets.","accessible_pdf":false,"authors":[{"affiliations":["University of Houston, Houston, United States"],"email":"nguyenpkk95@gmail.com","is_corresponding":true,"name":"Nguyen K Phan"},{"affiliations":["University of Houston, Houston, United States"],"email":"chengu@cs.uh.edu","is_corresponding":false,"name":"Guoning Chen"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Nguyen K Phan"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1163","time_end":"","time_stamp":"","time_start":"","title":"Curve Segment Neighborhood-based Vector Field Exploration","uid":"v-short-1163","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1166":{"abstract":"Custom animated visualizations of large, complex datasets are helpful across many domains, but they are hard to develop. Much of the difficulty arises from maintaining visualization state across a large set of animated graphical elements that may change in number over time. We contribute Counterpoint, a framework for state management designed to help implement such visualizations in JavaScript. Using Counterpoint, developers can manipulate large collections of marks with reactive attributes that are easy to render in scalable APIs such as Canvas and WebGL. Counterpoint also helps orchestrate the entry and exit of graphical elements using the concept of a rendering \"stage.\" Through a performance evaluation, we show that Counterpoint adds minimal overhead over current high-performance rendering techniques while simplifying implementation. We also provide two examples of visualizations created using Counterpoint that illustrate its flexibility and compatibility with other visualization toolkits as well as considerations for users with disabilities. 
Counterpoint is open-source and available at https://github.com/cmudig/counterpoint.","accessible_pdf":false,"authors":[{"affiliations":["Carnegie Mellon University, Pittsburgh, United States"],"email":"vsivaram@andrew.cmu.edu","is_corresponding":true,"name":"Venkatesh Sivaraman"},{"affiliations":["Carnegie Mellon University, Pittsburgh, United States"],"email":"fje@cmu.edu","is_corresponding":false,"name":"Frank Elavsky"},{"affiliations":["Carnegie Mellon University, Pittsburgh, United States"],"email":"domoritz@cmu.edu","is_corresponding":false,"name":"Dominik Moritz"},{"affiliations":["Carnegie Mellon University, Pittsburgh, United States"],"email":"adamperer@cmu.edu","is_corresponding":false,"name":"Adam Perer"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Venkatesh Sivaraman"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1166","time_end":"","time_stamp":"","time_start":"","title":"Counterpoint: Orchestrating Large-Scale Custom Animated Visualizations","uid":"v-short-1166","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1173":{"abstract":"Visualizing citation relations with network structures is widely used, but the visual complexity can make it challenging for individual researchers to navigate through them. We collected data from 18 researchers using an interface that we designed using network simplification methods and analyzed how users browsed and identified important papers. Our analysis reveals six major patterns used for identifying papers of interest, which can be categorized into three key components: Fields, Bridges, and Foundations, each viewed from two distinct perspectives: layout-oriented and connection-oriented. The connection-oriented approach was found to be more effective for selecting relevant papers, but the layout-oriented method was adopted more often, even though it led to unexpected results and user frustration. Our findings emphasize the importance of integrating these components and the necessity to balance visual layouts with meaningful connections to enhance the effectiveness of citation networks in academic browsing systems.","accessible_pdf":false,"authors":[{"affiliations":["Seoul National University, Seoul, Korea, Republic of"],"email":"krchoe@hcil.snu.ac.kr","is_corresponding":true,"name":"Kiroong Choe"},{"affiliations":["Seoul National University, Seoul, Korea, Republic of"],"email":"gracekim027@snu.ac.kr","is_corresponding":false,"name":"Eunhye Kim"},{"affiliations":["Dept.
of Electrical and Computer Engineering, SNU, Seoul, Korea, Republic of"],"email":"paulmoguri@snu.ac.kr","is_corresponding":false,"name":"Sangwon Park"},{"affiliations":["Seoul National University, Seoul, Korea, Republic of"],"email":"jseo@snu.ac.kr","is_corresponding":false,"name":"Jinwook Seo"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Kiroong Choe"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1173","time_end":"","time_stamp":"","time_start":"","title":"Fields, Bridges, and Foundations: How Researchers Browse Citation Network Visualizations","uid":"v-short-1173","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1177":{"abstract":"The proliferation of misleading visualizations online, particularly during critical events like public health crises and elections, poses a significant risk of misinformation. This work investigates the capability of GPT-4V to detect misleading visualizations. Utilizing a dataset of tweet-visualization pairs with various visual misleaders, we tested GPT-4V under four experimental conditions: naive zero-shot, naive few-shot, guided zero-shot, and guided few-shot. Our results demonstrate that GPT-4V can detect misleading visualizations with moderate accuracy without prior training (naive zero-shot) and that performance considerably improves by providing the model with the definitions of misleaders (guided zero-shot). However, combining definitions with examples of misleaders (guided few-shot) did not yield further improvements. This study underscores the feasibility of using large vision-language models such as GPT-4V to combat misinformation and emphasizes the importance of optimizing prompt engineering to enhance detection accuracy.","accessible_pdf":false,"authors":[{"affiliations":["University of Massachusetts Amherst, Amherst, United States"],"email":"jhalexander@umass.edu","is_corresponding":false,"name":"Jason Huang Alexander"},{"affiliations":["University of Massachusetts Amherst, Amherst, United States"],"email":"phnanda@umass.edu","is_corresponding":false,"name":"Priyal H Nanda"},{"affiliations":["Northeastern University, Boston, United States"],"email":"yangkc@iu.edu","is_corresponding":false,"name":"Kai-Cheng Yang"},{"affiliations":["University of Massachusetts Amherst, Amherst, United States"],"email":"asarv@cs.umass.edu","is_corresponding":true,"name":"Ali Sarvghad"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Ali Sarvghad"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1177","time_end":"","time_stamp":"","time_start":"","title":"Can GPT-4V Detect Misleading Visualizations?","uid":"v-short-1177","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1183":{"abstract":"An atmospheric front is an imaginary surface that separates two distinct air masses and is commonly defined as the warm-air side of a frontal zone with high gradients of atmospheric temperature and humidity.
These fronts are a widely used conceptual model in meteorology, often encountered in the literature as two-dimensional (2D) front lines on surface analysis charts. This paper presents a method for computing three-dimensional (3D) atmospheric fronts as surfaces that is capable of extracting continuous and well-confined features suitable for 3D visual analysis, spatio-temporal tracking, and statistical analyses. Recently developed contour-based methods for 3D front extraction rely on computing the third derivative of a moist potential temperature field. Additionally, they require the field to be smoothed to obtain continuous large-scale structures. This paper demonstrates the feasibility of an alternative method for front extraction using ridge surface computation. The proposed method requires only the second derivative of the input field and produces accurate structures even from unsmoothed data. An application of the ridge-based method to a data set corresponding to Cyclone Friederike demonstrates its benefits and utility towards visual analysis of the full 3D structure of fronts.","accessible_pdf":false,"authors":[{"affiliations":["Zuse Institute Berlin, Berlin, Germany"],"email":"anne.gossing@fu-berlin.de","is_corresponding":true,"name":"Anne Gossing"},{"affiliations":["Universit\u00e4t Hamburg, Hamburg, Germany"],"email":"andreas.beckert@uni-hamburg.de","is_corresponding":false,"name":"Andreas Beckert"},{"affiliations":["Universit\u00e4t Hamburg, Hamburg, Germany"],"email":"christoph.fischer-1@uni-hamburg.de","is_corresponding":false,"name":"Christoph Fischer"},{"affiliations":["Zuse Institute Berlin, Berlin, Germany"],"email":"klenert@zib.de","is_corresponding":false,"name":"Nicolas Klenert"},{"affiliations":["Indian Institute of Science, Bangalore, India"],"email":"vijayn@iisc.ac.in","is_corresponding":false,"name":"Vijay Natarajan"},{"affiliations":["Freie Universit\u00e4t Berlin, Berlin, Germany"],"email":"george.pacey@fu-berlin.de","is_corresponding":false,"name":"George Pacey"},{"affiliations":["Universit\u00e4t Hamburg, Hamburg, Germany"],"email":"thorwin.vogt@uni-hamburg.de","is_corresponding":false,"name":"Thorwin Vogt"},{"affiliations":["Universit\u00e4t Hamburg, Hamburg, Germany"],"email":"marc.rautenhaus@uni-hamburg.de","is_corresponding":false,"name":"Marc Rautenhaus"},{"affiliations":["Zuse Institute Berlin, Berlin, Germany"],"email":"baum@zib.de","is_corresponding":false,"name":"Daniel Baum"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Anne Gossing"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1183","time_end":"","time_stamp":"","time_start":"","title":"A Ridge-based Approach for Extraction and Visualization of 3D Atmospheric Fronts","uid":"v-short-1183","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1184":{"abstract":"To improve the perception of hierarchical structures in data sets, several color map generation algorithms have been proposed to take this structure into account. But the design of hierarchical color maps elicits different requirements to those of color maps for tabular data. Within this paper, we make an initial effort to put design rules from the color map literature into the context of hierarchical color maps.
We investigate the impact of several design decisions and provide recommendations for various analysis scenarios. Thus, we lay the foundation for objective quality criteria to evaluate hierarchical color maps.","accessible_pdf":false,"authors":[{"affiliations":["Fraunhofer IGD, Darmstadt, Germany"],"email":"tobias.mertz@igd.fraunhofer.de","is_corresponding":true,"name":"Tobias Mertz"},{"affiliations":["Fraunhofer IGD, Darmstadt, Germany","TU Darmstadt, Darmstadt, Germany"],"email":"joern.kohlhammer@igd.fraunhofer.de","is_corresponding":false,"name":"J\u00f6rn Kohlhammer"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Tobias Mertz"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1184","time_end":"","time_stamp":"","time_start":"","title":"Towards a Quality Approach to Hierarchical Color Maps","uid":"v-short-1184","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1185":{"abstract":"The visualization and interactive exploration of geo-referenced networks poses challenges if the network's nodes are not evenly distributed. Our approach proposes new ways of realizing animated transitions for exploring such networks from an ego-perspective. We aim to reduce the required screen real estate while maintaining the viewers' mental map of distances and directions. A preliminary study provides first insights into the comprehensibility of animated geographic transitions regarding directional relationships between start and end points in different projections. Two use cases showcase how ego-perspective graph exploration can be supported using less screen space than previous approaches.","accessible_pdf":false,"authors":[{"affiliations":["University of Stuttgart, Stuttgart, Germany"],"email":"max@mumintroll.org","is_corresponding":true,"name":"Max Franke"},{"affiliations":["University of Stuttgart, Stuttgart, Germany"],"email":"samuel.beck@vis.uni-stuttgart.de","is_corresponding":false,"name":"Samuel Beck"},{"affiliations":["University of Stuttgart, Stuttgart, Germany"],"email":"steffen.koch@vis.uni-stuttgart.de","is_corresponding":false,"name":"Steffen Koch"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Max Franke"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1185","time_end":"","time_stamp":"","time_start":"","title":"Two-point Equidistant Projection and Degree-of-interest Filtering for Smooth Exploration of Geo-referenced Networks","uid":"v-short-1185","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1186":{"abstract":"Data visualizations help extract insights from datasets, but reaching these insights requires decomposing high-level goals into low-level analytic tasks that can be complex due to varying degrees of data literacy and visualization experience. Recent advancements in large language models (LLMs) have shown promise for lowering barriers for users to achieve tasks such as writing code and may likewise facilitate visualization insight.
Scalable Vector Graphics (SVG), a text-based image format common in data visualizations, matches well with the text sequence processing of transformer-based LLMs. In this paper, we explore the capability of LLMs to perform 10 low-level visual analytic tasks defined by Amar, Eagan, and Stasko directly on SVG-based visualizations. Using zero-shot prompts, we instruct the models to provide responses or modify the SVG code based on given visualizations. Our findings demonstrate that LLMs can effectively modify existing SVG visualizations for some tasks like Cluster but perform poorly on tasks requiring mathematical operations like Compute Derived Value. We also discovered that LLM performance can vary based on factors such as the number of data points, the presence of value labels, and the chart type. Our findings contribute to gauging the general capabilities of LLMs and highlight the need for further exploration and development to fully harness their potential in supporting visual analytic tasks.","accessible_pdf":false,"authors":[{"affiliations":["Brown University, Providence, United States"],"email":"leooooxzz@gmail.com","is_corresponding":true,"name":"Zhongzheng Xu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Zhongzheng Xu"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1186","time_end":"","time_stamp":"","time_start":"","title":"Exploring the Capability of LLMs in Performing Low-Level Visual Analytic Tasks on SVG Data Visualizations","uid":"v-short-1186","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1188":{"abstract":"Vortices and their analysis play a critical role in the understanding of complex phenomena in turbulent flow. Traditional vortex extraction methods, notably region-based techniques, often overlook the entanglement phenomenon, resulting in the inclusion of multiple vortices within a single extracted region. Their separation is necessary for quantifying different types of vortices and their statistics. In this study, we propose a novel vortex separation method that extends the conventional contour tree-based segmentation approach with an additional step termed \u201clayering\u201d. Upon extracting a vortical region using specified vortex criteria (e.g., \u03bb2), we initially establish topological segmentation based on the contour tree, followed by the layering process to allocate appropriate segmentation IDs to unsegmented cells, thus separating individual vortices within the region. However, these regions may still suffer from inaccurate splits, which we address statistically by leveraging the continuity of vorticity lines across the split boundaries. 
Our findings demonstrate a significant improvement in both the separation of vortices and the mitigation of inaccurate splits compared to prior methods.","accessible_pdf":false,"authors":[{"affiliations":["University of Houston, Houston, United States"],"email":"adeelz92@gmail.com","is_corresponding":true,"name":"Adeel Zafar"},{"affiliations":["University of Houston, Houston, United States"],"email":"zpoorsha@cougarnet.uh.edu","is_corresponding":false,"name":"Zahra Poorshayegh"},{"affiliations":["University of Houston, Houston, United States"],"email":"diyang@uh.edu","is_corresponding":false,"name":"Di Yang"},{"affiliations":["University of Houston, Houston, United States"],"email":"chengu@cs.uh.edu","is_corresponding":false,"name":"Guoning Chen"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Adeel Zafar"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1188","time_end":"","time_stamp":"","time_start":"","title":"Topological Separation of Vortices","uid":"v-short-1188","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1189":{"abstract":"The information visualization research community commonly produces supporting software to demonstrate technical contributions to the field. However, developing this software tends to be an overwhelming task, and the final product tends to be a research prototype without much thought for modularization and re-usability, which makes it harder to replicate and adopt. This paper presents a design pattern for facilitating the creation, dissemination, and re-utilization of visualization techniques using reactive widgets. The design pattern features basic concepts that leverage modern front-end development best practices and standards, which ease development and replication. The paper presents several usage examples of the pattern, templates for implementation, and even a wrapper for facilitating the conversion of any Vega specification into a reactive widget.","accessible_pdf":false,"authors":[{"affiliations":["Northeastern University, San Francisco, United States"],"email":"john.guerra@gmail.com","is_corresponding":true,"name":"John Alexis Guerra-Gomez"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["John Alexis Guerra-Gomez"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1189","time_end":"","time_stamp":"","time_start":"","title":"Towards Reusable and Reactive Widgets for Information Visualization Research and Dissemination","uid":"v-short-1189","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1191":{"abstract":"To enable data-driven decision-making across organizations, data professionals need to share insights with their colleagues in context-appropriate communication channels.
Many of their colleagues rely on data but are not themselves analysts; furthermore, their colleagues are reluctant or unable to use dedicated analytical applications or dashboards, and they expect communication to take place within threaded collaboration platforms such as Slack or Microsoft Teams. In this paper, we introduce a set of six strategies for adapting content from business intelligence (BI) dashboards into appropriate formats for sharing on collaboration platforms, formats that we refer to as dashboard snapshots. Informed by prior studies of enterprise communication around data, these strategies go beyond redesigning or restyling by considering varying levels of data literacy across an organization, introducing affordances for self-service question-answering, and anticipating the post-sharing lifecycle of data artifacts. These strategies involve the use of templates that are matched to common communicative intents, serving to reduce the workload of data professionals. We contribute a formal representation of these strategies and demonstrate their applicability in a comprehensive enterprise communication scenario featuring multiple stakeholders that unfolds over the span of months.","accessible_pdf":false,"authors":[{"affiliations":["Northwestern University, Evanston, United States"],"email":"hyeokkim2024@u.northwestern.edu","is_corresponding":true,"name":"Hyeok Kim"},{"affiliations":["Tableau Research, Seattle, United States"],"email":"arjun.srinivasan.10@gmail.com","is_corresponding":false,"name":"Arjun Srinivasan"},{"affiliations":["Tableau Research, Seattle, United States"],"email":"mbrehmer@uwaterloo.ca","is_corresponding":false,"name":"Matthew Brehmer"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Hyeok Kim"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1191","time_end":"","time_stamp":"","time_start":"","title":"Bringing Data into the Conversation: Adapting Content from Business Intelligence Dashboards for Threaded Collaboration Platforms","uid":"v-short-1191","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1192":{"abstract":"Narrative visualization has become a crucial tool in data presentation, merging storytelling with data visualization to convey complex information in an engaging and accessible manner. In this study, we review the design space for narrative visualizations, focusing on animation style, through a comprehensive analysis of 71 papers from key visualization venues. We categorize these papers into six broad themes: Animation Style, Interactivity, Technology Usage, Methodology Development, Evaluation Type, and Application Domain. Our findings reveal a significant evolution in the field, marked by a growing preference for animated and non-interactive techniques. This trend reflects a shift towards minimizing user interaction while enhancing the clarity and impact of data presentation. We also identified key trends and technologies that have shaped the field, highlighting the role of technologies, such as machine learning in driving these changes. 
We offer insights into the dynamic interrelations within the narrative visualization domain, suggesting a future research trajectory that balances interactivity with automated tools to foster increased engagement. Our work lays the groundwork for future approaches for effective and innovative narrative visualization in diverse applications.","accessible_pdf":false,"authors":[{"affiliations":["Louisiana State University, Baton Rouge, United States"],"email":"jyang44@lsu.edu","is_corresponding":true,"name":"Vyri Junhan Yang"},{"affiliations":["Louisiana State University, Baton Rouge, United States"],"email":"mjasim@lsu.edu","is_corresponding":false,"name":"Mahmood Jasim"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Vyri Junhan Yang"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1192","time_end":"","time_stamp":"","time_start":"","title":"Animating the Narrative: A Review of Animation Styles in Narrative Visualization","uid":"v-short-1192","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1193":{"abstract":"We present LinkQ, a system that leverages a large language model (LLM) to facilitate knowledge graph (KG) query construction through natural language question-answering. Traditional approaches often require detailed knowledge of complex graph querying languages, limiting the ability for users -- even experts -- to acquire valuable insights from KG data. LinkQ simplifies this process by first interpreting a user's question, then converting it into a well-formed KG query. By using the LLM to construct a query instead of directly answering the user's question, LinkQ guards against the LLM hallucinating or generating false, erroneous information. By integrating an LLM into LinkQ, users are able to conduct both exploratory and confirmatory data analysis, with the LLM helping to iteratively refine open-ended questions into precise ones. To demonstrate the efficacy of LinkQ, we conducted a qualitative study with five KG practitioners and distilled their feedback.
Our results indicate that practitioners find LinkQ effective for KG question-answering, and desire future LLM-assisted systems for the exploratory analysis of graph databases.","accessible_pdf":false,"authors":[{"affiliations":["MIT Lincoln Laboratory, Lexington, United States"],"email":"harry.li@ll.mit.edu","is_corresponding":true,"name":"Harry Li"},{"affiliations":["Tufts University, Medford, United States"],"email":"gabriel.appleby@tufts.edu","is_corresponding":false,"name":"Gabriel Appleby"},{"affiliations":["MIT Lincoln Laboratory, Lexington, United States"],"email":"ashley.suh@ll.mit.edu","is_corresponding":false,"name":"Ashley Suh"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Harry Li"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1193","time_end":"","time_stamp":"","time_start":"","title":"LinkQ: An LLM-Assisted Visual Interface for Knowledge Graph Question-Answering","uid":"v-short-1193","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1199":{"abstract":"In the digital landscape, the ubiquity of data visualizations in media underscores the necessity for accessibility to ensure inclusivity for all users, including those with visual impairments. Current visual content often fails to cater to the needs of screen reader users due to the absence of comprehensive textual descriptions. To address this gap, we propose in this paper a framework designed to empower media content creators to transform charts into descriptive narratives. This tool not only facilitates the understanding of complex visual data through text but also fosters a broader awareness of accessibility in digital content creation. Through the application of this framework, users can interpret and convey the insights of data visualizations more effectively, accommodating a diverse audience. Our evaluations reveal that this tool not only enhances the comprehension of data visualizations but also promotes new perspectives on the represented data, thereby broadening the interpretative possibilities for all users.","accessible_pdf":false,"authors":[{"affiliations":["Polytechnique Montr\u00e9al, Montr\u00e9al, Canada"],"email":"qiangxu1204@gmail.com","is_corresponding":true,"name":"Qiang Xu"},{"affiliations":["Polytechnique Montreal, Montreal, Canada"],"email":"thomas.hurtut@polymtl.ca","is_corresponding":false,"name":"Thomas Hurtut"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Qiang Xu"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1199","time_end":"","time_stamp":"","time_start":"","title":"From Graphs to Words: A Computer-Assisted Framework for the Production of Accessible Text Descriptions","uid":"v-short-1199","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1207":{"abstract":"An essential task of an air traffic controller is to manage the traffic flow by predicting future trajectories. 
Complex traffic patterns are difficult to predict and manage and impose cognitive load on the air traffic controllers (ATCos). In this work, we present an interactive visual analytics interface which facilitates detection and resolution of complex traffic patterns for air traffic controllers. The interface supports the users in detecting complex clusters of aircraft, uses visual representations to communicate the detected patterns to the controllers, and proposes re-routing. The interface further enables the ATCos to visualize and simultaneously compare how different re-routing strategies for each individual aircraft yield a reduction of complexity in the entire sector for the next hour. The development of the concepts was supported by the domain-specific feedback we received from six fully licensed and operational air traffic controllers in an iterative design process over a period of 14 months.","accessible_pdf":false,"authors":[{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden","Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"elmira.zohrevandi@liu.se","is_corresponding":true,"name":"Elmira Zohrevandi"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden","Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"katerina.vrotsou@liu.se","is_corresponding":false,"name":"Katerina Vrotsou"},{"affiliations":["Institute of Science and Technology, Norrk\u00f6ping, Sweden","Institute of Science and Technology, Norrk\u00f6ping, Sweden"],"email":"carl.westin@liu.se","is_corresponding":false,"name":"Carl A. L. Westin"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden","Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"jonas.lundberg@liu.se","is_corresponding":false,"name":"Jonas Lundberg"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden","Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"anders.ynnerman@liu.se","is_corresponding":false,"name":"Anders Ynnerman"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Elmira Zohrevandi"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1207","time_end":"","time_stamp":"","time_start":"","title":"Design of a Real-Time Visual Analytics Decision Support Interface to Manage Air Traffic Complexity","uid":"v-short-1207","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1211":{"abstract":"Transfer function design is crucial in volume rendering, as it directly influences the visual representation and interpretation of volumetric data. However, creating effective transfer functions that align with users\u2019 visual objectives is often challenging due to the complex parameter space and the semantic gap between transfer function values and features of interest within the volume. In this work, we propose a novel approach that leverages recent advancements in language-vision models to bridge this semantic gap. By employing a fully differentiable rendering pipeline and an image-based loss function guided by language descriptions, our method generates transfer functions that yield volume-rendered images closely matching the user\u2019s intent.
We demonstrate the effectiveness of our approach in creating meaningful transfer functions from simple descriptions, empowering users to intuitively express their desired visual outcomes with minimal effort. This advancement streamlines the transfer function design process and makes volume rendering more accessible to a broader range of users.","accessible_pdf":false,"authors":[{"affiliations":["Vanderbilt University, Nashville, United States"],"email":"sangwon.jeong@vanderbilt.edu","is_corresponding":true,"name":"Sangwon Jeong"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"jixianli@sci.utah.edu","is_corresponding":false,"name":"Jixian Li"},{"affiliations":["Lawrence Livermore National Laboratory , Livermore, United States"],"email":"shusenl@sci.utah.edu","is_corresponding":false,"name":"Shusen Liu"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"crj@sci.utah.edu","is_corresponding":false,"name":"Chris R. Johnson"},{"affiliations":["Vanderbilt University, Nashville, United States"],"email":"matthew.berger@vanderbilt.edu","is_corresponding":false,"name":"Matthew Berger"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Sangwon Jeong"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1211","time_end":"","time_stamp":"","time_start":"","title":"Text-based transfer function design for semantic volume rendering","uid":"v-short-1211","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1224":{"abstract":"Diffusion-based generative models\u2019 impressive ability to create convincing images has garnered global attention. However, their complex structures and operations often pose challenges for non-experts to grasp. We present Diffusion Explainer, the first interactive visualization tool that explains how Stable Diffusion transforms text prompts into images. Diffusion Explainer tightly integrates a visual overview of Stable Diffusion\u2019s complex structure with explanations of the underlying operations. By comparing image generation of prompt variants, users can discover the impact of keyword changes on image generation. A 56-participant user study demonstrates that Diffusion Explainer offers substantial learning benefits to non-experts. Our tool has been used by over 10,300 users from 124 countries at https://poloclub.github.io/diffusion-explainer/.","accessible_pdf":false,"authors":[{"affiliations":["Georgia Tech, Atlanta, United States"],"email":"seongmin@gatech.edu","is_corresponding":true,"name":"Seongmin Lee"},{"affiliations":["GA Tech, Atlanta, United States","IBM Research AI, Cambridge, United States"],"email":"benjamin.hoover@ibm.com","is_corresponding":false,"name":"Benjamin Hoover"},{"affiliations":["IBM Research AI, Cambridge, United States"],"email":"hendrik@strobelt.com","is_corresponding":false,"name":"Hendrik Strobelt"},{"affiliations":["Georgia Tech, Atlanta, United States"],"email":"jayw@gatech.edu","is_corresponding":false,"name":"Zijie J. 
Wang"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"speng65@gatech.edu","is_corresponding":false,"name":"ShengYun Peng"},{"affiliations":["Georgia Institute of Technology , Atlanta , United States"],"email":"apwright@gatech.edu","is_corresponding":false,"name":"Austin P Wright"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"kevin.li@gatech.edu","is_corresponding":false,"name":"Kevin Li"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"haekyu@gatech.edu","is_corresponding":false,"name":"Haekyu Park"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"alexanderyang@gatech.edu","is_corresponding":false,"name":"Haoyang Yang"},{"affiliations":["Georgia Tech, Atlanta, United States"],"email":"polo@gatech.edu","is_corresponding":false,"name":"Duen Horng (Polo) Chau"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Seongmin Lee"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1224","time_end":"","time_stamp":"","time_start":"","title":"Diffusion Explainer: Visual Explanation for Text-to-image Stable Diffusion","uid":"v-short-1224","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1235":{"abstract":"A high number of samples often leads to occlusion in scatterplots, which hinders data perception and analysis. De-cluttering approaches based on spatial transformation reduce visual clutter by remapping samples using the entire available scatterplot domain. Such regularized scatterplots may still be used for data analysis tasks, if the spatial transformation is smooth and preserves the original neighborhood relations of samples. Recently, Rave et al. proposed an efficient regularization method based on integral images. We propose a generalization of their regularization scheme using sector-based transformations with the aim of increasing sample uniformity of the resulting scatterplot. 
We document the improvement of our approach using various uniformity measures.","accessible_pdf":false,"authors":[{"affiliations":["University of M\u00fcnster, M\u00fcnster, Germany"],"email":"hennes.rave@uni-muenster.de","is_corresponding":true,"name":"Hennes Rave"},{"affiliations":["University of M\u00fcnster, M\u00fcnster, Germany"],"email":"molchano@uni-muenster.de","is_corresponding":false,"name":"Vladimir Molchanov"},{"affiliations":["University of M\u00fcnster, M\u00fcnster, Germany"],"email":"linsen@uni-muenster.de","is_corresponding":false,"name":"Lars Linsen"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Hennes Rave"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1235","time_end":"","time_stamp":"","time_start":"","title":"Uniform Sample Distribution in Scatterplots via Sector-based Transformation","uid":"v-short-1235","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1236":{"abstract":"Automatically generating data visualizations in response to human utterances on datasets necessitates a deep semantic understanding of the data utterance, including implicit and explicit references to data attributes, visualization tasks, and necessary data preparation steps. Natural Language Interfaces (NLIs) for data visualization have explored ways to infer such information, yet challenges persist due to inherent uncertainty in human speech. Recent advances in Large Language Models (LLMs) provide an avenue to address these challenges, but their ability to extract the relevant semantic information remains unexplored. In this study, we evaluate four publicly available LLMs (GPT-4, Gemini-Pro, Llama3, and Mixtral), investigating their ability to comprehend utterances even in the presence of uncertainty and identify the relevant data context and visual tasks. Our findings reveal that LLMs are sensitive to uncertainties in utterance. Despite this sensitivity, they are able to extract the relevant data context. However, LLMs struggle with inferring visualization tasks. Based on these results, we highlight future research directions on using LLMs for visualization generation. Our supplementary materials have been shared on OSF: https://osf.io/j342a/wiki/home/?view_only=b4051ffc6253496d9bce818e4a89b9f9","accessible_pdf":false,"authors":[{"affiliations":["University of Maryland, College Park, United States"],"email":"hbako@umd.edu","is_corresponding":true,"name":"Hannah K. Bako"},{"affiliations":["University of Maryland, College Park, United States"],"email":"arshnoorbhutani8@gmail.com","is_corresponding":false,"name":"Arshnoor Bhutani"},{"affiliations":["The University of Texas at Austin, Austin, United States"],"email":"xinyi.liu@utexas.edu","is_corresponding":false,"name":"Xinyi Liu"},{"affiliations":["University of Maryland, College Park, United States"],"email":"kcobbina@cs.umd.edu","is_corresponding":false,"name":"Kwesi Adu Cobbina"},{"affiliations":["University of Maryland, College Park, United States"],"email":"leozcliu@umd.edu","is_corresponding":false,"name":"Zhicheng Liu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Hannah K. 
Bako"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1236","time_end":"","time_stamp":"","time_start":"","title":"Evaluating the Semantic Profiling Abilities of LLMs for Natural Language Utterances in Data Visualization","uid":"v-short-1236","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1248":{"abstract":"Statistical practices such as building regression models or running hypothesis tests rely on following rigorous procedures of steps and verifying assumptions on data to produce valid results. However, common statistical tools do not verify users\u2019 decision choices and provide low-level statistical functions without instructions on the whole analysis practice. Users can easily misuse analysis methods, potentially decreasing the validity of results. To address this problem, we introduce GuidedStats, an interactive interface within computational notebooks that encapsulates guidance, models, visualization, and exportable results into interactive workflows. It breaks down typical analysis processes, such as linear regression and two-sample T-tests, into interactive steps supplemented with automatic visualizations and explanations for step-wise evaluation. Users can iterate on input choices to refine their models, while recommended actions and exports allow the user to continue their analysis in code. Case studies show how GuidedStats offers valuable instructions for conducting fluid statistical analyses while finding possible assumption violations in the underlying data, supporting flexible and accurate statistical analyses.","accessible_pdf":false,"authors":[{"affiliations":["New York University, New York, United States"],"email":"yz9381@nyu.edu","is_corresponding":true,"name":"Yuqi Zhang"},{"affiliations":["Carnegie Mellon University, Pittsburgh, United States"],"email":"adamperer@cmu.edu","is_corresponding":false,"name":"Adam Perer"},{"affiliations":["Carnegie Mellon University, Pittsburgh, United States"],"email":"willepp@cmu.edu","is_corresponding":false,"name":"Will Epperson"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yuqi Zhang"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1248","time_end":"","time_stamp":"","time_start":"","title":"Guided Statistical Workflows with Interactive Explanations and Assumption Checking","uid":"v-short-1248","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1264":{"abstract":"The Local Moran's I statistic is a valuable tool for identifying localized patterns of spatial autocorrelation. Understanding these patterns is crucial in spatial analysis, but interpreting the statistic can be difficult. To simplify this process, we introduce three novel visualizations that enhance the interpretation of Local Moran's I results. These visualizations can be interactively linked to one another, and to established visualizations, to offer a more holistic exploration of the results. 
We provide a JavaScript library with implementations of these new visual elements, along with a web dashboard that demonstrates their integrated use.","accessible_pdf":false,"authors":[{"affiliations":["NIH, Rockville, United States","Queen's University, Belfast, United Kingdom"],"email":"masonlk@nih.gov","is_corresponding":true,"name":"Lee Mason"},{"affiliations":["Queen's University Belfast, Belfast, United Kingdom"],"email":"b.hicks@qub.ac.uk","is_corresponding":false,"name":"Bl\u00e1naid Hicks"},{"affiliations":["National Institutes of Health, Rockville, United States"],"email":"jonas.dealmeida@nih.gov","is_corresponding":false,"name":"Jonas S Almeida"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Lee Mason"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1264","time_end":"","time_stamp":"","time_start":"","title":"Demystifying Spatial Dependence: Interactive Visualizations for Interpreting Local Spatial Autocorrelation","uid":"v-short-1264","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1274":{"abstract":"This study examines the impact of positive and negative contrast polarities (i.e., light and dark modes) on the performance of younger adults and people in their late adulthood (PLA). In a crowdsourced study with 134 participants (69 below age 60, 66 aged 60 and above), we assessed their accuracy and time performing analysis tasks across three common visualization types (Bar, Line, Scatterplot) and two contrast polarities (positive and negative). We observed that, across both age groups, the polarity that led to better performance and the resulting amount of improvement varied on an individual basis, with each polarity benefiting comparable proportions of participants. Additionally, we observed that the choice of contrast polarity can have an impact on time similar to that of the choice of visualization type, resulting in an average percent difference of around 36%. These findings indicate that, overall, the effects of contrast polarity on visual analysis performance do not noticeably change with age. Furthermore, they underscore the importance of making visualizations available in both contrast polarities to better support a broad audience with differing needs.","accessible_pdf":false,"authors":[{"affiliations":["University of Massachusetts Amherst, Amherst, United States"],"email":"zwhile@cs.umass.edu","is_corresponding":true,"name":"Zack While"},{"affiliations":["University of Massachusetts Amherst, Amherst, United States"],"email":"asarv@cs.umass.edu","is_corresponding":false,"name":"Ali Sarvghad"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Zack While"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1274","time_end":"","time_stamp":"","time_start":"","title":"Dark Mode or Light Mode?
Exploring the Impact of Contrast Polarity on Visualization Performance Between Age Groups","uid":"v-short-1274","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1276":{"abstract":"Machine Learning models for chart-grounded Q&A (CQA) often treat charts as images, but performing CQA on pixel values has proven challenging. We thus investigate a resource overlooked by current ML-based approaches: the declarative documents describing how charts should visually encode data (i.e., chart specifications). In this work, we use chart specifications to enhance language models (LMs) for chart-reading tasks, such that the resulting system can robustly understand language for CQA. Through a case study with 359 bar charts, we test novel fine-tuning schemes on both GPT-3 and T5 using a new dataset curated for two CQA tasks: question-answering and visual explanation generation. Our text-only approaches strongly outperform vision-based GPT-4 on explanation generation (99% vs. 63% accuracy), and show promising results for question-answering (57-67% accuracy). Through in-depth experiments, we also show that our text-only approaches are mostly robust to natural language variation.","accessible_pdf":false,"authors":[{"affiliations":["Adobe Research, San Jose, United States"],"email":"victorbursztyn2022@u.northwestern.edu","is_corresponding":true,"name":"Victor S. Bursztyn"},{"affiliations":["Adobe Research, Seattle, United States"],"email":"jhoffs@adobe.com","is_corresponding":false,"name":"Jane Hoffswell"},{"affiliations":["Adobe Research, San Jose, United States"],"email":"sguo@adobe.com","is_corresponding":false,"name":"Shunan Guo"},{"affiliations":["Adobe Research, San Jose, United States"],"email":"eunyee@adobe.com","is_corresponding":false,"name":"Eunyee Koh"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Victor S. Bursztyn"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1276","time_end":"","time_stamp":"","time_start":"","title":"Representing Charts as Text for Language Models: An In-Depth Study of Question Answering for Bar Charts","uid":"v-short-1276","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1277":{"abstract":"Trust is a subjective yet fundamental component of human-computer interaction, and is a determining factor in shaping the efficacy of data visualizations. Prior research has identified five dimensions of trust assessment in visualizations (credibility, clarity, reliability, familiarity, and confidence), and observed that these dimensions tend to vary predictably along with certain features of the visualization being evaluated. This raises a further question: how do the design features driving viewers' trust assessment vary with the characteristics of the viewers themselves? By reanalyzing data from these studies through the lens of individual differences, we build a more detailed map of the relationships between design features, individual characteristics, and trust behaviors. In particular, we model the distinct contributions of endogenous design features (such as visualization type, or the use of color) and exogenous user characteristics (such as visualization literacy), as well as the interactions between them. 
We then use these findings to make recommendations for individualized and adaptive visualization design.","accessible_pdf":false,"authors":[{"affiliations":["Smith College, Northampton, United States"],"email":"jcrouser@smith.edu","is_corresponding":true,"name":"R. Jordan Crouser"},{"affiliations":["Smith College, Northampton, United States"],"email":"cmatoussi@smith.edu","is_corresponding":false,"name":"Syrine Matoussi"},{"affiliations":["Smith College, Northampton, United States"],"email":"ekung@smith.edu","is_corresponding":false,"name":"Lan Kung"},{"affiliations":["Washington University in St. Louis, St. Louis, United States"],"email":"p.saugat@wustl.edu","is_corresponding":false,"name":"Saugat Pandey"},{"affiliations":["Washington University in St. Louis, St. Louis, United States"],"email":"m.oen@wustl.edu","is_corresponding":false,"name":"Oen G McKinley"},{"affiliations":["Washington University in St. Louis, St. Louis, United States"],"email":"alvitta@wustl.edu","is_corresponding":false,"name":"Alvitta Ottley"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["R. Jordan Crouser"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1277","time_end":"","time_stamp":"","time_start":"","title":"Building and Eroding: Exogenous and Endogenous Factors that Influence Subjective Trust in Visualization","uid":"v-short-1277","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1285":{"abstract":"This study examines the impact of social-comparison risk visualizations on public health communication, comparing the effects of traditional bar charts against alternative jitter plots emphasizing geographic variability (geo jitter). The research highlights that whereas both visualization types increased perceived vulnerability, behavioral intent, and policy support, the geo jitter plots were significantly more effective in reducing unjustified personal attributions. Importantly, the findings also underscore the emotional challenges faced by visualization viewers from marginalized communities, indicating a need for designs that are sensitive to the potential for reinforcing stereotypes or eliciting negative emotions. This work suggests a strategic reevaluation of visual communication tools in public health to enhance understanding and engagement without contributing to negative attributions or emotional distress.","accessible_pdf":false,"authors":[{"affiliations":["3iap, Raleigh, United States"],"email":"eli@3iap.com","is_corresponding":false,"name":"Eli Holder"},{"affiliations":["Northeastern University, Boston, United States","University of California Merced, Merced, United States"],"email":"l.padilla@northeastern.edu","is_corresponding":true,"name":"Lace M. Padilla"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Lace M. 
Padilla"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1285","time_end":"","time_stamp":"","time_start":"","title":"\"Must Be a Tuesday\": Affect, Attribution, and Geographic Variability in Equity-Oriented Visualizations of Population Health Disparities","uid":"v-short-1285","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1292":{"abstract":"Collaborative planning for congenital heart diseases typically involves creating physical heart models through 3D printing, which are then examined by both surgeons and cardiologists. Recent developments in mobile augmented reality (AR) technologies have presented a viable alternative, known for their ease of use and portability. However, there is still a lack of research examining the utilization of multi-user mobile AR environments to support collaborative planning for cardiovascular surgeries. We created ARCollab, an iOS AR app designed for enabling multiple surgeons and cardiologists to interact with a patient's 3D heart model in a shared environment. ARCollab enables surgeons and cardiologists to import heart models, manipulate them through gestures and collaborate with other users, eliminating the need for fabricating physical heart models. Our evaluation of ARCollab's usability and usefulness in enhancing collaboration, conducted with three cardiothoracic surgeons and two cardiologists, marks the first human evaluation of a multi-user mobile AR tool for surgical planning. ARCollab is open-source, available at https://github.com/poloclub/arcollab.","accessible_pdf":false,"authors":[{"affiliations":["Georgia Tech, Atlanta, United States"],"email":"pratham.mehta001@gmail.com","is_corresponding":true,"name":"Pratham Darrpan Mehta"},{"affiliations":["Georgia Tech, Atlanta, United States"],"email":"rnarayanan39@gatech.edu","is_corresponding":false,"name":"Rahul Ozhur Narayanan"},{"affiliations":["Georgia Tech, Atlanta, United States"],"email":"harsha5431@gmail.com","is_corresponding":false,"name":"Harsha Karanth"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"alexanderyang@gatech.edu","is_corresponding":false,"name":"Haoyang Yang"},{"affiliations":["Emory University, Atlanta, United States"],"email":"slesnickt@kidsheart.com","is_corresponding":false,"name":"Timothy C Slesnick"},{"affiliations":["Emory University/Children's Healthcare of Atlanta, Atlanta, United States"],"email":"fawwaz.shaw@choa.org","is_corresponding":false,"name":"Fawwaz Shaw"},{"affiliations":["Georgia Tech, Atlanta, United States"],"email":"polo@gatech.edu","is_corresponding":false,"name":"Duen Horng (Polo) Chau"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Pratham Darrpan Mehta"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1292","time_end":"","time_stamp":"","time_start":"","title":"Multi-User Mobile Augmented Reality for Cardiovascular Surgical Planning","uid":"v-short-1292","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-short-1301":{"abstract":"Reactionary delay'' is a result 
of the accumulated cascading effects of knock-on train delays. It is becoming an increasing problem as shared railway infrastructure becomes more crowded. The chaotic nature of its effects is notoriously hard to predict. We use a stochastic Monte-Carto-style simulation of reactionary delay that produces whole distributions of likely reactionary delay. Our contribution is the demonstrating how Zoomable GlyphTables -- case-by-variable tables in which cases are rows, variables are columns, variables are complex composite metrics that incorporate distributions, and cells contain mini-charts that depict these as different level of detail through zoom interaction -- help interpret these results for helping understanding the causes and effects of reactionary delay and how they have been informing timetable robustness testing and tweaking. We describe our design principles, demonstrate how this supported our analytical tasks and we reflect on wider potential for Zoomable GlyphTables to be used more widely.","accessible_pdf":false,"authors":[{"affiliations":["City, University of London, London, United Kingdom"],"email":"a.slingsby@city.ac.uk","is_corresponding":true,"name":"Aidan Slingsby"},{"affiliations":["Risk Solutions, Warrington, United Kingdom"],"email":"jonathan.hyde@risksol.co.uk","is_corresponding":false,"name":"Jonathan Hyde"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Aidan Slingsby"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1301","time_end":"","time_stamp":"","time_start":"","title":"Zoomable Glyph Tables for Interpreting Probabilistic Model Outputs for Reactionary Train Delays","uid":"v-short-1301","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20223193756":{"abstract":"Information visualization uses various types of representations to encode data into graphical formats. Prior work on visualization techniques has evaluated the accuracy of perceived numerical data values from visual data encodings such as graphical position, length, orientation, size, and color. Our work aims to extend the research of graphical perception to the use of motion as data encodings for quantitative values. We present two experiments implementing multiple fundamental aspects of motion such as type, speed, and synchronicity that can be used for numerical value encoding as well as comparing motion to static visual encodings in terms of user perception and accuracy. We studied how well users can assess the differences between several types of motion and static visual encodings and present an updated ranking of accuracy for quantitative judgments. Our results indicate that non-synchronized motion can be interpreted more quickly and more accurately than synchronized motion. Moreover, our ranking of static and motion visual representations shows that motion, especially expansion and translational types, has great potential as a data encoding technique for quantitative value. 
Finally, we discuss the implications for the use of animation and motion for numerical representations in data visualization.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Shaghayegh Esmaeili"},{"affiliations":"","email":"","is_corresponding":false,"name":"Samia Kabir"},{"affiliations":"","email":"","is_corresponding":false,"name":"Anthony M. Colas"},{"affiliations":"","email":"","is_corresponding":false,"name":"Rhema P. Linder"},{"affiliations":"","email":"","is_corresponding":false,"name":"Eric D. Ragan"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Shaghayegh Esmaeili"],"doi":"10.1109/TVCG.2022.3193756","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Information visualization, animation and motion-related techniques, empirical study, graphical perception, evaluation."],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20223193756","time_end":"","time_stamp":"","time_start":"","title":"Evaluating Graphical Perception of Visual Motion for Quantitative Data Encoding","uid":"v-tvcg-20223193756","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20223229017":{"abstract":"We present V-Mail, a framework of cross-platform applications, interactive techniques, and communication protocols for improved multi-person correspondence about spatial 3D datasets. Inspired by the daily use of e-mail, V-Mail seeks to enable a similar style of rapid, multi-person communication accessible on any device; however, it aims to do this in the new context of spatial 3D communication, where limited access to 3D graphics hardware typically prevents such communication. The approach integrates visual data storytelling with data exploration, spatial annotations, and animated transitions. V-Mail ``data stories'' are exported in a standard video file format to establish a common baseline level of access on (almost) any device. The V-Mail framework also includes a series of complementary client applications and plugins that enable different degrees of story co-authoring and data exploration, adjusted automatically to match the capabilities of various devices. A lightweight, phone-based V-Mail app makes it possible to annotate data by adding captions to the video. These spatial annotations are then immediately accessible to team members running high-end 3D graphics visualization systems that also include a V-Mail client, implemented as a plugin. Results and evaluation from applying V-Mail to assist communication within an interdisciplinary science team studying Antarctic ice sheets confirm the utility of the asynchronous, cross-platform collaborative framework while also highlighting some current limitations and opportunities for future work.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Jung Who Nam"},{"affiliations":"","email":"","is_corresponding":false,"name":"Tobias Isenberg"},{"affiliations":"","email":"","is_corresponding":false,"name":"Daniel F. 
Keefe"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Jung Who Nam"],"doi":"10.1109/TVCG.2022.3229017","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Human-computer interaction, visualization of scientific 3D data, communication, storytelling, immersive analytics"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20223229017","time_end":"","time_stamp":"","time_start":"","title":"V-Mail: 3D-Enabled Correspondence about Spatial Data on (Almost) All Your Devices","uid":"v-tvcg-20223229017","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20233261320":{"abstract":"In recent years, narrative visualization has gained much attention. Researchers have proposed different design spaces for various narrative visualization genres and scenarios to facilitate the creation process. As users' needs grow and automation technologies advance, increasingly more tools have been designed and developed. In this study, we summarized six genres of narrative visualization (annotated charts, infographics, timelines & storylines, data comics, scrollytelling & slideshow, and data videos) based on previous research and four types of tools (design spaces, authoring tools, ML/AI-supported tools and ML/AI-generator tools) based on the intelligence and automation level of the tools. We surveyed 105 papers and tools to study how automation can progressively engage in visualization design and narrative processes to help users easily create narrative visualizations. This research aims to provide an overview of current research and development in the automation involvement of narrative visualization tools. We discuss key research problems in each category and suggest new opportunities to encourage further research in the related domain.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Qing Chen"},{"affiliations":"","email":"","is_corresponding":false,"name":"Shixiong Cao"},{"affiliations":"","email":"","is_corresponding":false,"name":"Jiazhe Wang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Nan Cao"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Qing Chen"],"doi":"10.1109/TVCG.2023.3261320","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Data Visualization, Automatic Visualization, Narrative Visualization, Design Space, Authoring Tools, Survey"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233261320","time_end":"","time_stamp":"","time_start":"","title":"How Does Automation Shape the Process of Narrative Visualization: A Survey of Tools","uid":"v-tvcg-20233261320","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20233275925":{"abstract":"A contiguous area cartogram is a geographic map in which the area of each region is proportional to numerical data (e.g., population size) while keeping neighboring regions connected. In this study, we investigated whether value-to-area legends (square symbols next to the values represented by the squares' areas) and grid lines aid map readers in making better area judgments. 
We conducted an experiment to determine the accuracy, speed, and confidence with which readers infer numerical data values for the mapped regions. We found that, when only informed about the total numerical value represented by the whole cartogram without any legend, the distribution of estimates for individual regions was centered near the true value with substantial spread. Legends with grid lines significantly reduced the spread but led to a tendency to underestimate the values. Comparing differences between regions or between cartograms revealed that legends and grid lines slowed the estimation without improving accuracy. However, participants were more likely to complete the tasks when legends and grid lines were present, particularly when the area units represented by these features could be interactively selected. We recommend considering the cartogram's use case and purpose before deciding whether to include grid lines or an interactive legend.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":false,"name":"Kelvin L. T. Fung"},{"affiliations":"","email":"","is_corresponding":false,"name":"Simon T. Perrault"},{"affiliations":"","email":"","is_corresponding":false,"name":"Michael T. Gastner"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Michael Gastner"],"doi":"10.1109/TVCG.2023.3275925","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Task Analysis, Symbols, Data Visualization, Sociology, Visualization, Switches, Mice, Cartogram, Geovisualization, Interactive Data Exploration, Quantitative Evaluation"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233275925","time_end":"","time_stamp":"","time_start":"","title":"Effectiveness of Area-to-Value Legends and Grid Lines in Contiguous Area Cartograms","uid":"v-tvcg-20233275925","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20233287585":{"abstract":"Data visualization and journalism are deeply connected. From early infographics to recent data-driven storytelling, visualization has become an integrated part of contemporary journalism, primarily as a communication artifact to inform the general public. Data journalism, harnessing the power of data visualization, has emerged as a bridge between the growing volume of data and our society. Visualization research that centers around data storytelling has sought to understand and facilitate such journalistic endeavors. However, a recent metamorphosis in journalism has brought broader challenges and opportunities that extend beyond mere communication of data. We present this article to enhance our understanding of such transformations and thus broaden visualization research's scope and practical contribution to this evolving field. We first survey recent significant shifts, emerging challenges, and computational practices in journalism. We then summarize six roles of computing in journalism and their implications. Based on these implications, we provide propositions for visualization research concerning each role. 
Ultimately, by mapping the roles and propositions onto a proposed ecological model and contextualizing existing visualization research, we surface seven general topics and a series of research agendas that can guide future visualization research at this intersection.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Yu Fu"},{"affiliations":"","email":"","is_corresponding":false,"name":"John Stasko"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yu Fu"],"doi":"10.1109/TVCG.2023.3287585","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Computational journalism, data visualization, data-driven storytelling, journalism"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233287585","time_end":"","time_stamp":"","time_start":"","title":"More Than Data Stories: Broadening the Role of Visualization in Contemporary Journalism","uid":"v-tvcg-20233287585","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20233289292":{"abstract":"Reading a visualization is like reading a paragraph. Each sentence is a comparison: the mean of these is higher than those; this difference is smaller than that. What determines which comparisons are made first? The viewer's goals and expertise matter, but the way that values are visually grouped together within the chart also impacts those comparisons. Research from psychology suggests that comparisons involve multiple steps. First, the viewer divides the visualization into a set of units. This might include a single bar or a grouped set of bars. Then the viewer selects and compares two of these units, perhaps noting that one pair of bars is longer than another. Viewers might take an additional third step and perform a second-order comparison, perhaps determining that the difference between one pair of bars is greater than the difference between another pair. We create a visual comparison taxonomy that allows us to develop and test a sequence of hypotheses about which comparisons people are more likely to make when reading a visualization. We find that people tend to compare two groups before comparing two individual bars and that second-order comparisons are rare. Visual cues like spatial proximity and color can influence which elements are grouped together and selected for comparison, with spatial proximity being a stronger grouping cue. 
Interestingly, once viewers have grouped together and compared a set of bars, regardless of whether the group is formed by spatial proximity or color similarity, they no longer consider other possible groupings in their comparisons.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Cindy Xiong Bearfield"},{"affiliations":"","email":"","is_corresponding":false,"name":"Chase Stokes"},{"affiliations":"","email":"","is_corresponding":false,"name":"Andrew Lovett"},{"affiliations":"","email":"","is_corresponding":false,"name":"Steven Franconeri"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Cindy Xiong Bearfield"],"doi":"10.1109/TVCG.2023.3289292","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["comparison, perception, visual grouping, bar charts, verbal conclusions."],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233289292","time_end":"","time_stamp":"","time_start":"","title":"What Does the Chart Say? Grouping Cues Guide Viewer Comparisons and Conclusions in Bar Charts","uid":"v-tvcg-20233289292","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20233299602":{"abstract":"Data transformation is an essential step in data science. While experts primarily use programming to transform their data, there is an increasing need to support non-programmers with user interface-based tools. With the rapid development in interaction techniques and computing environments, we report our empirical findings about the effects of interaction techniques and environments on performing data transformation tasks. Specifically, we studied the potential benefits of direct interaction and virtual reality (VR) for data transformation. We compared gesture interaction versus a standard WIMP user interface, each on the desktop and in VR. With the tested data and tasks, we found time performance was similar between desktop and VR. Meanwhile, VR demonstrates preliminary evidence to better support provenance and sense-making throughout the data transformation process. Our exploration of performing data transformation in VR also provides initial affirmation for enabling an iterative and fully immersive data science workflow.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Sungwon In"},{"affiliations":"","email":"","is_corresponding":false,"name":"Tica Lin"},{"affiliations":"","email":"","is_corresponding":false,"name":"Chris North"},{"affiliations":"","email":"","is_corresponding":false,"name":"Hanspeter Pfister"},{"affiliations":"","email":"","is_corresponding":false,"name":"Yalong Yang"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Sungwon In"],"doi":"10.1109/TVCG.2023.3299602","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Immersive Analytics, Data Transformation, Data Science, Interaction, Empirical Study, Virtual/Augmented/Mixed Reality"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233299602","time_end":"","time_stamp":"","time_start":"","title":"This is the Table I Want! 
Interactive Data Transformation on Desktop and in Virtual Reality","uid":"v-tvcg-20233299602","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20233302308":{"abstract":"We visualize the predictions of multiple machine learning models to help biologists as they interactively make decisions about cell lineage---the development of a (plant) embryo from a single ovum cell. Based on a confocal microscopy dataset, biologists traditionally constructed the cell lineage manually, starting from this observation and reasoning backward in time to establish cell inheritance. To speed up this tedious process, we make use of machine learning (ML) models trained on a database of manually established cell lineages to assist the biologist in cell assignment. Most biologists, however, are not familiar with ML, nor is it clear to them which model best predicts the embryo's development. We thus have developed a visualization system that is designed to support biologists in exploring and comparing ML models, checking the model predictions, detecting possible ML model mistakes, and deciding on the most likely embryo development. To evaluate our proposed system, we deployed our interface with six biologists in an observational study. Our results show that the visual representations of machine learning are easily understandable, and our tool, LineageD+, could potentially increase biologists' working efficiency and enhance the understanding of embryos.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Jiayi Hong"},{"affiliations":"","email":"","is_corresponding":false,"name":"Ross Maciejewski"},{"affiliations":"","email":"","is_corresponding":false,"name":"Alain Trubuil"},{"affiliations":"","email":"","is_corresponding":false,"name":"Tobias Isenberg"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Jiayi Hong"],"doi":"10.1109/TVCG.2023.3302308","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Visualization, visual analytics, machine learning, comparing ML predictions, human-AI teaming, plant biology, cell lineage"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233302308","time_end":"","time_stamp":"","time_start":"","title":"Visualizing and Comparing Machine Learning Predictions to Improve Human-AI Teaming on the Example of Cell Lineage","uid":"v-tvcg-20233302308","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20233306356":{"abstract":"A multitude of studies have been conducted on graph drawing, but many existing methods only focus on optimizing a single aesthetic aspect of graph layouts. There are a few existing methods that attempt to develop a flexible solution for optimizing different aesthetic aspects measured by different aesthetic criteria. Furthermore, thanks to the significant advance in deep learning techniques, several deep learning-based layout methods were proposed recently, which have demonstrated the advantages of the deep learning approaches for graph drawing. However, none of these existing methods can be directly applied to optimizing non-differentiable criteria without special accommodation. 
In this work, we propose a novel Generative Adversarial Network (GAN) based deep learning framework for graph drawing, called SmartGD, which can optimize any quantitative aesthetic goals even though they are non-differentiable. In the cases where the aesthetic goal is too abstract to be described mathematically, SmartGD can draw graphs in a similar style as a collection of good layout examples, which might be selected by humans based on the abstract aesthetic goal. To demonstrate the effectiveness and efficiency of SmartGD, we conduct experiments on minimizing stress, minimizing edge crossing, maximizing crossing angle, and a combination of multiple aesthetics. The experimental results show that, compared with several popular graph drawing algorithms, SmartGD achieves good performance both quantitatively and qualitatively.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Xiaoqi Wang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Kevin Yen"},{"affiliations":"","email":"","is_corresponding":false,"name":"Yifan Hu"},{"affiliations":"","email":"","is_corresponding":false,"name":"Han-Wei Shen"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Xiaoqi Wang"],"doi":"10.1109/TVCG.2023.3306356","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233306356","time_end":"","time_stamp":"","time_start":"","title":"SmartGD: A GAN-Based Graph Drawing Framework for Diverse Aesthetic Goals","uid":"v-tvcg-20233306356","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20233310019":{"abstract":"The dynamic network visualization design space consists of two major dimensions: network structural and temporal representation. As more techniques are developed and published, a clear need for evaluation and experimental comparisons between them emerges. Most studies explore the temporal dimension and diverse interaction techniques supporting the participants, focusing on a single structural representation. Empirical evidence about performance and preference for different visualization approaches is scattered over different studies, experimental settings, and tasks. This paper aims to comprehensively investigate the dynamic network visualization design space in two evaluations. First, a controlled study assessing participants' response times, accuracy, and preferences for different combinations of network structural and temporal representations on typical dynamic network exploration tasks, with and without the support of standard interaction methods. Second, the best-performing combinations from the first study are enhanced based on participants' feedback and evaluated in a heuristic-based qualitative study with visualization experts on a real-world network. Our results highlight node-link with animation and playback controls as the best-performing combination and the most preferred based on ratings. Matrices achieve similar performance to node-link in the first study but have considerably lower scores in our second evaluation. 
Similarly, juxtaposition exhibits evident scalability issues in more realistic analysis contexts.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Velitchko Filipov"},{"affiliations":"","email":"","is_corresponding":false,"name":"Alessio Arleo"},{"affiliations":"","email":"","is_corresponding":false,"name":"Markus B\u00f6gl"},{"affiliations":"","email":"","is_corresponding":false,"name":"Silvia Miksch"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Velitchko Filipov"],"doi":"10.1109/TVCG.2023.3310019","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233310019","time_end":"","time_stamp":"","time_start":"","title":"On Network Structural and Temporal Encodings: A Space and Time Odyssey","uid":"v-tvcg-20233310019","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20233316469":{"abstract":"Automated visualization recommendation facilitates the rapid creation of effective visualizations, which is especially beneficial for users with limited time and limited knowledge of data visualization. There is an increasing trend in leveraging machine learning (ML) techniques to achieve an end-to-end visualization recommendation. However, existing ML-based approaches implicitly assume that there is only one appropriate visualization for a specific dataset, which is often not true for real applications. Also, they often work like a black box, making it difficult for users to understand the reasons for recommending specific visualizations. To fill the research gap, we propose AdaVis, an adaptive and explainable approach to recommend one or multiple appropriate visualizations for a tabular dataset. It leverages a box embedding-based knowledge graph to effectively model the possible one-to-many mapping relations among different entities (i.e., data features, dataset columns, datasets, and visualization choices). The embeddings of the entities and relations can be learned from dataset-visualization pairs. Also, AdaVis incorporates the attention mechanism into the inference framework. Attention can indicate the relative importance of data features for a dataset and provide fine-grained explainability. 
Our extensive evaluations through quantitative metrics, case studies, and user interviews demonstrate the effectiveness of AdaVis.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Songheng Zhang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Yong Wang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Haotian Li"},{"affiliations":"","email":"","is_corresponding":false,"name":"Huamin Qu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Songheng Zhang"],"doi":"10.1109/TVCG.2023.3316469","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Visualization Recommendation, Logical Reasoning, Data Visualization, Knowledge Graph"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233316469","time_end":"","time_stamp":"","time_start":"","title":"AdaVis: Adaptive and Explainable Visualization Recommendation for Tabular Data","uid":"v-tvcg-20233316469","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20233322372":{"abstract":"Visualization linting is a proven effective tool in assisting users to follow established visualization guidelines. Despite its success, visualization linting for choropleth maps, one of the most popular visualizations on the internet, has yet to be investigated. In this paper, we present GeoLinter, a linting framework for choropleth maps that assists in creating accurate and robust maps. Based on a set of design guidelines and metrics drawing upon a collection of best practices from the cartographic literature, GeoLinter detects potentially suboptimal design decisions and provides further recommendations on design improvement with explanations at each step of the design process. We perform a validation study to evaluate the proposed framework's functionality with respect to identifying and fixing errors and apply its results to improve the robustness of GeoLinter. Finally, we demonstrate the effectiveness of GeoLinter - validated through empirical studies - by applying it to a series of case studies using real-world datasets.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":false,"name":"Fan Lei"},{"affiliations":"","email":"","is_corresponding":true,"name":"Arlen Fan"},{"affiliations":"","email":"","is_corresponding":false,"name":"Alan M. 
MacEachren"},{"affiliations":"","email":"","is_corresponding":false,"name":"Ross Maciejewski"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Arlen Fan"],"doi":"10.1109/TVCG.2023.3322372","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Data visualization , Image color analysis , Geology , Recommender systems , Guidelines , Bars , Visualization Author Keywords: Automated visualization design , choropleth maps , visualization linting , visualization recommendation"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233322372","time_end":"","time_stamp":"","time_start":"","title":"GeoLinter: A Linting Framework for Choropleth Maps","uid":"v-tvcg-20233322372","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20233322898":{"abstract":"Visual and interactive machine learning systems (IML) are becoming ubiquitous as they empower individuals with varied machine learning expertise to analyze data. However, it remains complex to align interactions with visual marks to a user\u2019s intent for steering machine learning models. We explore using data and visual design probes to elicit users\u2019 desired interactions to steer ML models via visual encodings within IML interfaces. We conducted an elicitation study with 20 data analysts with varying expertise in ML. We summarize our findings as pairs of target-interaction, which we compare to prior systems to assess the utility of the probes. We additionally surfaced insights about factors influencing how and why participants chose to interact with visual encodings, including refraining from interacting. Finally, we reflect on the value of gathering such formative empirical evidence via data and visual design probes ahead of developing IML prototypes. ","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Anamaria Crisan"},{"affiliations":"","email":"","is_corresponding":false,"name":"Maddie Shang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Eric Brochu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Anamaria Crisan"],"doi":"10.1109/TVCG.2023.3322898","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Design Probes, Interactive Machine Learning, Model Steering, Semantic Interaction"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233322898","time_end":"","time_stamp":"","time_start":"","title":"Eliciting Model Steering Interactions from Users via Data and Visual Design Probes","uid":"v-tvcg-20233322898","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20233323150":{"abstract":"We examined user preferences to combine multiple interaction modalities for collaborative interaction with data shown on large vertical displays. Large vertical displays facilitate visual data exploration and allow the use of diverse interaction modalities by multiple users at different distances from the screen. Yet, how to offer multiple interaction modalities is a non-trivial problem. 
We conducted an elicitation study with 20 participants that generated 1015 interaction proposals combining touch, speech, pen, and mid-air gestures. Given the opportunity to interact using these four modalities, participants preferred speech interaction in 10 of 15 low-level tasks and direct manipulation for straightforward tasks such as showing a tooltip or selecting. In contrast to previous work, participants most favored unimodal and personal interactions. We identified what we call collaborative synonyms among their interaction proposals and found that pairs of users collaborated either unimodally and simultaneously or multimodally and sequentially. We provide insights into how end-users associate visual exploration tasks with certain modalities and how they collaborate at different interaction distances using specific interaction modalities. The supplemental material is available at https://osf.io/m8zuh.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Gabriela Molina Le\u00f3n"},{"affiliations":"","email":"","is_corresponding":false,"name":"Petra Isenberg"},{"affiliations":"","email":"","is_corresponding":false,"name":"Andreas Breiter"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Gabriela Molina Le\u00f3n"],"doi":"10.1109/TVCG.2023.3323150","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Multimodal interaction, collaborative work, large vertical displays, elicitation study, spatio-temporal data"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233323150","time_end":"","time_stamp":"","time_start":"","title":"Eliciting Multimodal and Collaborative Interactions for Data Exploration on Large Vertical Displays","uid":"v-tvcg-20233323150","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20233324851":{"abstract":"Dimensionality reduction (DR) algorithms are diverse and widely used for analyzing high-dimensional data. Various metrics and tools have been proposed to evaluate and interpret the DR results. However, most metrics and methods fail to be well generalized to measure any DR results from the perspective of original distribution fidelity or lack interactive exploration of DR results. There is still a need for more intuitive and quantitative analysis to interactively explore high-dimensional data and improve interpretability. We propose a metric and a generalized algorithm-agnostic approach based on the concept of capacity to evaluate and analyze the DR results. Based on our approach, we develop a visual analytic system HiLow for exploring high-dimensional data and projections. We also propose a mixed-initiative recommendation algorithm that assists users in interactively manipulating DR results. Users can compare the differences in data distribution after the interaction through HiLow. Furthermore, we propose a novel visualization design focusing on quantitative analysis of differences between high and low-dimensional data distributions. 
Finally, through a user study and case studies, we validate the effectiveness of our approach and system in enhancing the interpretability of projections and analyzing the distribution of high and low-dimensional data.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":false,"name":"Yang Zhang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Jisheng Liu"},{"affiliations":"","email":"","is_corresponding":false,"name":"Chufan Lai"},{"affiliations":"","email":"","is_corresponding":false,"name":"Yuan Zhou"},{"affiliations":"","email":"","is_corresponding":true,"name":"Siming Chen"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Siming Chen"],"doi":"10.1109/TVCG.2023.3324851","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233324851","time_end":"","time_stamp":"","time_start":"","title":"Interpreting High-Dimensional Projections With Capacity","uid":"v-tvcg-20233324851","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20233326698":{"abstract":"Researchers have derived many theoretical models for specifying users\u2019 insights as they interact with a visualization system. These representations are essential for understanding the insight discovery process, such as when inferring user interaction patterns that lead to insight or assessing the rigor of reported insights. However, theoretical models can be difficult to apply to existing tools and user studies, often due to discrepancies in how insight and its constituent parts are defined. This paper calls attention to the consistent structures that recur across the visualization literature and describes how they connect multiple theoretical representations of insight. We synthesize a unified formalism for insights using these structures, enabling a wider audience of researchers and developers to adopt the corresponding models. Through a series of theoretical case studies, we use our formalism to compare and contrast existing theories, revealing interesting research challenges in reasoning about a user's domain knowledge and leveraging synergistic approaches in data mining and data management research.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Leilani Battle"},{"affiliations":"","email":"","is_corresponding":false,"name":"Alvitta Ottley"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Leilani Battle"],"doi":"10.1109/TVCG.2023.3326698","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233326698","time_end":"","time_stamp":"","time_start":"","title":"What Do We Mean When We Say \u201cInsight\u201d? A Formal Synthesis of Existing Theory","uid":"v-tvcg-20233326698","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20233330262":{"abstract":"This paper presents a computational framework for the concise encoding of an ensemble of persistence diagrams, in the form of weighted Wasserstein barycenters [100], [102] of a dictionary of atom diagrams. 
We introduce a multi-scale gradient descent approach for the efficient resolution of the corresponding minimization problem, which interleaves the optimization of the barycenter weights with the optimization of the atom diagrams. Our approach leverages the analytic expressions for the gradient of both sub-problems to ensure fast iterations, and it additionally exploits shared-memory parallelism. Extensive experiments on public ensembles demonstrate the efficiency of our approach, with Wasserstein dictionary computations on the order of minutes for the largest examples. We show the utility of our contributions in two applications. First, we apply Wasserstein dictionaries to data reduction and reliably compress persistence diagrams by concisely representing them with their weights in the dictionary. Second, we present a dimensionality reduction framework based on a Wasserstein dictionary defined with a small number of atoms (typically three) and encode the dictionary as a low-dimensional simplex embedded in a visual space (typically in 2D). In both applications, quantitative experiments assess the relevance of our framework. Finally, we provide a C++ implementation that can be used to reproduce our results.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":false,"name":"Keanu Sisouk"},{"affiliations":"","email":"","is_corresponding":false,"name":"Julie Delon"},{"affiliations":"","email":"","is_corresponding":true,"name":"Julien Tierny"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Julien Tierny"],"doi":"10.1109/TVCG.2023.3330262","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Topological data analysis, ensemble data, persistence diagrams"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233330262","time_end":"","time_stamp":"","time_start":"","title":"Wasserstein Dictionaries of Persistence Diagrams","uid":"v-tvcg-20233330262","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20233332511":{"abstract":"We present Submerse, an end-to-end framework for visualizing flooding scenarios on large and immersive display ecologies. Specifically, we reconstruct a surface mesh from input flood simulation data and generate a to-scale 3D virtual scene by incorporating geographical data such as terrain, textures, buildings, and additional scene objects. To optimize computation and memory performance for large simulation datasets, we discretize the data on an adaptive grid using dynamic quadtrees and support level-of-detail based rendering. Moreover, to provide a perception of flooding direction for a time instance, we animate the surface mesh by synthesizing water waves. As interaction is key for effective decision-making and analysis, we introduce two novel techniques for flood visualization in immersive systems: (1) an automatic scene-navigation method using optimal camera viewpoints generated for marked points-of-interest based on the display layout, and (2) an AR-based focus+context technique using an aux display system. Submerse is developed in collaboration between computer scientists and atmospheric scientists. 
We evaluate the effectiveness of our system and application by conducting workshops with emergency managers, domain experts, and concerned stakeholders in the Stony Brook Reality Deck, an immersive gigapixel facility, to visualize a superstorm flooding scenario in New York City.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Saeed Boorboor"},{"affiliations":"","email":"","is_corresponding":false,"name":"Yoonsang Kim"},{"affiliations":"","email":"","is_corresponding":false,"name":"Ping Hu"},{"affiliations":"","email":"","is_corresponding":false,"name":"Josef Moses"},{"affiliations":"","email":"","is_corresponding":false,"name":"Brian Colle"},{"affiliations":"","email":"","is_corresponding":false,"name":"Arie E. Kaufman"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Saeed Boorboor"],"doi":"10.1109/TVCG.2023.3332511","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Camera navigation, flooding simulation visualization, immersive visualization, mixed reality"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233332511","time_end":"","time_stamp":"","time_start":"","title":"Submerse: Visualizing Storm Surge Flooding Simulations in Immersive Display Ecologies","uid":"v-tvcg-20233332511","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20233332999":{"abstract":"Quantum computing offers significant speedup compared to classical computing, which has led to a growing interest among users in learning and applying quantum computing across various applications. However, quantum circuits, which are fundamental for implementing quantum algorithms, can be challenging for users to understand due to their underlying logic, such as the temporal evolution of quantum states and the effect of quantum amplitudes on the probability of basis quantum states. To fill this research gap, we propose QuantumEyes, an interactive visual analytics system to enhance the interpretability of quantum circuits through both global and local levels. For the global-level analysis, we present three coupled visualizations to delineate the changes of quantum states and the underlying reasons: a Probability Summary View to overview the probability evolution of quantum states; a State Evolution View to enable an in-depth analysis of the influence of quantum gates on the quantum states; a Gate Explanation View to show the individual qubit states and facilitate a better understanding of the effect of quantum gates. For the local-level analysis, we design a novel geometrical visualization dandelion chart to explicitly reveal how the quantum amplitudes affect the probability of the quantum state. We thoroughly evaluated QuantumEyes as well as the novel dandelion chart integrated into it through two case studies on different types of quantum algorithms and in-depth expert interviews with 12 domain experts. 
The results demonstrate the effectiveness and usability of our approach in enhancing the interpretability of quantum circuits.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Shaolun Ruan"},{"affiliations":"","email":"","is_corresponding":false,"name":"Qiang Guan"},{"affiliations":"","email":"","is_corresponding":false,"name":"Paul Griffin"},{"affiliations":"","email":"","is_corresponding":false,"name":"Ying Mao"},{"affiliations":"","email":"","is_corresponding":false,"name":"Yong Wang"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Shaolun Ruan"],"doi":"10.1109/TVCG.2023.3332999","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Data visualization, design study, interpretability, quantum computing."],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233332999","time_end":"","time_stamp":"","time_start":"","title":"QuantumEyes: Towards Better Interpretability of Quantum Circuits","uid":"v-tvcg-20233332999","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20233333356":{"abstract":"As urban populations grow, effectively assessing urban performance measures such as livability and comfort becomes increasingly important due to their significant socioeconomic impacts. While Point of Interest (POI) data has been utilized for various applications in location-based services, its potential for urban performance analytics remains unexplored. In this paper, we present SenseMap, a novel approach for analyzing urban performance by leveraging POI data as a semantic representation of urban functions. We quantify the contribution of POIs to different urban performance measures by calculating semantic textual similarities on our constructed corpus. We propose Semantic-adaptive Kernel Density Estimation, which takes into account POIs\u2019 influential areas across different Traffic Analysis Zones and semantic contributions to generate semantic density maps for measures. We design and implement a feature-rich, real-time visual analytics system for users to explore the urban performance of their surroundings. Evaluations with human judgment and reference data demonstrate the feasibility and validity of our method. 
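The density-map construction can be approximated as a similarity-weighted kernel density estimate. The sketch below is a simplified stand-in for Semantic-adaptive Kernel Density Estimation: per-POI bandwidths mimic zone-dependent influence areas, and all locations, scores, and names are invented for illustration:

```python
# Simplified weighted KDE in the spirit of semantic-adaptive density maps:
# each POI contributes a Gaussian kernel scaled by its semantic similarity
# to the target measure (e.g., "livability"). Illustrative numbers only.
import numpy as np

pois = np.array([[0.2, 0.3], [0.7, 0.8], [0.5, 0.1]])   # POI locations
similarity = np.array([0.9, 0.4, 0.7])                   # semantic scores
bandwidth = np.array([0.10, 0.05, 0.08])                 # influence radii

def semantic_density(grid_pts, pois, sim, bw):
    d2 = ((grid_pts[:, None, :] - pois[None, :, :]) ** 2).sum(-1)
    kernels = np.exp(-d2 / (2 * bw**2)) / (2 * np.pi * bw**2)
    return (sim * kernels).sum(axis=1)       # similarity-weighted sum

xs, ys = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
grid = np.column_stack([xs.ravel(), ys.ravel()])
density = semantic_density(grid, pois, similarity, bandwidth).reshape(50, 50)
print(f"peak density: {density.max():.2f}")
```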
Usage scenarios and user studies demonstrate the capability, usability, and explainability of our system.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Juntong Chen"},{"affiliations":"","email":"","is_corresponding":false,"name":"Qiaoyun Huang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Changbo Wang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Chenhui Li"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Juntong Chen"],"doi":"10.1109/TVCG.2023.3333356","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Urban data, semantic textual similarity, point of interest, density map, visual analytics, visualization design"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233333356","time_end":"","time_stamp":"","time_start":"","title":"SenseMap: Urban Performance Visualization and Analytics via Semantic Textual Similarity","uid":"v-tvcg-20233333356","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20233334513":{"abstract":"Data integration is often performed to consolidate information from multiple disparate data sources during visual data analysis. However, integration operations are usually separate from visual analytics operations such as encode and filter in both interface design and empirical research. We conducted a preliminary user study to investigate whether and how data integration should be incorporated directly into the visual analytics process. We used two interface alternatives featuring contrasting approaches to the data preparation and analysis workflow: manual file-based ex-situ integration as a separate step from visual analytics operations; and automatic UI-based in-situ integration merged with visual analytics operations. Participants were asked to complete specific and free-form tasks with each interface, browsing for patterns, generating insights, and summarizing relationships between attributes distributed across multiple files. Analyzing participants' interactions and feedback, we found that task completion time and total interactions were similar across interfaces and tasks; we also observed unique integration strategies between interfaces and emergent behaviors related to satisficing and cognitive bias. Participants' time spent and interactions revealed that in-situ integration enabled users to spend more time on analysis tasks compared with ex-situ integration. Participants' integration strategies and analytical behaviors revealed differences in interface usage for generating and tracking hypotheses and insights. 
With these results, we synthesized preliminary guidelines for designing future visual analytics interfaces that can support integrating attributes throughout an active analysis process.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Adam Coscia"},{"affiliations":"","email":"","is_corresponding":false,"name":"Ashley Suh"},{"affiliations":"","email":"","is_corresponding":false,"name":"Remco Chang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Alex Endert"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Adam Coscia"],"doi":"10.1109/TVCG.2023.3334513","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Visual analytics, Data integration, User interface design, Integration strategies, Analytical behaviors."],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233334513","time_end":"","time_stamp":"","time_start":"","title":"Preliminary Guidelines For Combining Data Integration and Visual Data Analysis","uid":"v-tvcg-20233334513","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20233334755":{"abstract":"This paper presents a computational framework for the Wasserstein auto-encoding of merge trees (MT-WAE), a novel extension of the classical auto-encoder neural network architecture to the Wasserstein metric space of merge trees. In contrast to traditional auto-encoders which operate on vectorized data, our formulation explicitly manipulates merge trees on their associated metric space at each layer of the network, resulting in superior accuracy and interpretability. Our novel neural network approach can be interpreted as a non-linear generalization of previous linear attempts [79] at merge tree encoding. It also trivially extends to persistence diagrams. Extensive experiments on public ensembles demonstrate the efficiency of our algorithms, with MT-WAE computations on the order of minutes on average. We show the utility of our contributions in two applications adapted from previous work on merge tree encoding [79]. First, we apply MT-WAE to merge tree compression, by concisely representing merge trees with their coordinates in the final layer of our auto-encoder. Second, we document an application to dimensionality reduction, by exploiting the latent space of our auto-encoder, for the visual analysis of ensemble data. We illustrate the versatility of our framework by introducing two penalty terms, to help preserve in the latent space both the Wasserstein distances between merge trees and their clusters. In both applications, quantitative experiments assess the relevance of our framework. 
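For contrast with the merge-tree formulation, the classical vectorized auto-encoder that MT-WAE generalizes looks as follows. This is a generic PyTorch sketch, not the paper's architecture; MT-WAE replaces these vector-space layers with operations in the Wasserstein metric space of merge trees:

```python
# The vectorized baseline that MT-WAE generalizes: a classical auto-encoder
# compresses fixed-length vectors through a low-dimensional latent layer.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(128, 32), nn.ReLU(), nn.Linear(32, 2))
decoder = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 128))

x = torch.rand(64, 128)                       # 64 vectorized inputs
opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()],
                       lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    latent = encoder(x)                       # 2D latent space, analogous to
    loss = nn.functional.mse_loss(decoder(latent), x)  # the paper's planar
    loss.backward()                           # layouts for ensemble analysis
    opt.step()
print(latent.detach()[:3])                    # coordinates usable for plots
```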
Finally, we provide a C++ implementation that can be used for reproducibility.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":false,"name":"Mathieu Pont"},{"affiliations":"","email":"","is_corresponding":true,"name":"Julien Tierny"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Julien Tierny"],"doi":"10.1109/TVCG.2023.3334755","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Topological data analysis, ensemble data, persistence diagrams, merge trees, auto-encoders, neural networks"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233334755","time_end":"","time_stamp":"","time_start":"","title":"Wasserstein Auto-Encoders of Merge Trees (and Persistence Diagrams)","uid":"v-tvcg-20233334755","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20233336588":{"abstract":"This article explores how the ability to recall information in data visualizations depends on the presentation technology. Participants viewed 10 Isotype visualizations on a 2D screen, in 3D, in Virtual Reality (VR) and in Mixed Reality (MR). To provide a fair comparison between the three 3D conditions, we used LIDAR to capture the details of the physical rooms, and used this information to create our textured 3D models. For all environments, we measured the number of visualizations recalled and their order (2D) or spatial location (3D, VR, MR). We also measured the number of syntactic and semantic features recalled. Results of our study show increased recall and greater richness of data understanding in the MR condition. Not only did participants recall more visualizations and ordinal/spatial positions in MR, but they also remembered more details about graph axes and data mappings, and more information about the shape of the data. 
We discuss how differences in the spatial and kinesthetic cues provided in these different environments could contribute to these results, and reasons why we did not observe comparable performance in the 3D and VR conditions.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Christophe Hurter"},{"affiliations":"","email":"","is_corresponding":false,"name":"Bernice Rogowitz"},{"affiliations":"","email":"","is_corresponding":false,"name":"Guillaume Truong"},{"affiliations":"","email":"","is_corresponding":false,"name":"Tiffany Andry"},{"affiliations":"","email":"","is_corresponding":false,"name":"Hugo Romat"},{"affiliations":"","email":"","is_corresponding":false,"name":"Ludovic Gardy"},{"affiliations":"","email":"","is_corresponding":false,"name":"Fereshteh Amini"},{"affiliations":"","email":"","is_corresponding":false,"name":"Nathalie Henry Riche"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Christophe Hurter"],"doi":"10.1109/TVCG.2023.3336588","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Data visualization, Three-dimensional displays, Virtual reality, Mixed reality, Electronic mail, Syntactics, Semantics"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233336588","time_end":"","time_stamp":"","time_start":"","title":"Memory Recall for Data Visualizations in Mixed Reality, Virtual Reality, 3D, and 2D","uid":"v-tvcg-20233336588","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20233337173":{"abstract":"Visualization design studies bring together visualization researchers and domain experts to address yet unsolved data analysis challenges stemming from the needs of the domain experts. Typically, the visualization researchers lead the design study process and implementation of any visualization solutions. This setup leverages the visualization researchers' knowledge of methodology, design, and programming, but limited availability to synchronize with the domain experts can hamper the design process. We consider an alternative setup where the domain experts take the lead in the design study, supported by the visualization experts. In this study, the domain experts are computer architecture experts who simulate and analyze novel computer chip designs. These chips rely on a Network-on-Chip (NOC) to connect components. The experts want to understand how the chip designs perform and what in the design led to their performance. To aid this analysis, we develop Vis4Mesh, a visualization system that provides spatial, temporal, and architectural context to simulated NOC behavior. Integration with an existing computer architecture visualization tool enables architects to perform deep-dives into specific architecture component behavior. We validate Vis4Mesh through a case study and a user study with computer architecture researchers. We reflect on our design and process, discussing advantages, disadvantages, and guidance for engaging in domain expert-led design studies.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":false,"name":"Shaoyu Wang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Hang Yan"},{"affiliations":"","email":"","is_corresponding":false,"name":"Katherine E. 
Isaacs"},{"affiliations":"","email":"","is_corresponding":true,"name":"Yifan Sun"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yifan Sun"],"doi":"10.1109/TVCG.2023.3337173","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Data Visualization, Design Study, Network-on-Chip, Performance Analysis"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233337173","time_end":"","time_stamp":"","time_start":"","title":"Visual Exploratory Analysis for Designing Large-Scale Network-on-Chip Architectures: A Domain Expert-Led Design Study","uid":"v-tvcg-20233337173","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20233337396":{"abstract":"Partitioning a dynamic network into subsets (i.e., snapshots) based on disjoint time intervals is a widely used technique for understanding how structural patterns of the network evolve. However, selecting an appropriate time window (i.e., slicing a dynamic network into snapshots) is challenging and time-consuming, often involving a trial-and-error approach to investigating underlying structural patterns. To address this challenge, we present MoNetExplorer, a novel interactive visual analytics system that leverages temporal network motifs to provide recommendations for window sizes and support users in visually comparing different slicing results. MoNetExplorer provides a comprehensive analysis based on window size, including (1) a temporal overview to identify the structural information, (2) temporal network motif composition, and (3) node-link-diagram-based details to enable users to identify and understand structural patterns at various temporal resolutions. To demonstrate the effectiveness of our system, we conducted a case study with network researchers using two real-world dynamic network datasets. 
Our case studies show that the system effectively supports users in gaining valuable insights into the temporal and structural aspects of dynamic networks.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Seokweon Jung"},{"affiliations":"","email":"","is_corresponding":false,"name":"DongHwa Shin"},{"affiliations":"","email":"","is_corresponding":false,"name":"Hyeon Jeon"},{"affiliations":"","email":"","is_corresponding":false,"name":"Kiroong Choe"},{"affiliations":"","email":"","is_corresponding":false,"name":"Jinwook Seo"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Seokweon Jung"],"doi":"10.1109/TVCG.2023.3337396","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Visual analytics, Measurement, Size measurement, Windows, Time measurement, Data visualization, Task analysis, Visual analytics, Dynamic networks, Temporal network motifs, Interactive network slicing"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233337396","time_end":"","time_stamp":"","time_start":"","title":"A Visual Analytics System for Analyzing Dynamic Networks with Temporal Network Motifs","uid":"v-tvcg-20233337396","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20233337642":{"abstract":"Molecular docking is a key technique in various fields like structural biology, medicinal chemistry, and biotechnology. It is widely used for virtual screening during drug discovery, computer-assisted drug design, and protein engineering. A general molecular docking process consists of the target and ligand selection, their preparation, and the docking process itself, followed by the evaluation of the results. However, the most commonly used docking software provides no or only very basic evaluation possibilities. Scripting and external molecular viewers are often used, which are not designed for an efficient analysis of docking results. Therefore, we developed InVADo, a comprehensive interactive visual analysis tool for large docking data. It consists of multiple linked 2D and 3D views. It filters and spatially clusters the data, and enriches it with post-docking analysis results of protein-ligand interactions and functional groups, to enable well-founded decision-making. In an exemplary case study, domain experts confirmed that InVADo facilitates and accelerates the analysis workflow. They rated it as a convenient, comprehensive, and feature-rich tool, especially useful for virtual screening.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":false,"name":"Marco Sch\u00e4fer"},{"affiliations":"","email":"","is_corresponding":false,"name":"Nicolas Brich"},{"affiliations":"","email":"","is_corresponding":false,"name":"Jan By\u0161ka"},{"affiliations":"","email":"","is_corresponding":false,"name":"S\u00e9rgio M. 
Marques"},{"affiliations":"","email":"","is_corresponding":false,"name":"David Bedn\u00e1\u0159"},{"affiliations":"","email":"","is_corresponding":false,"name":"Philipp Thiel"},{"affiliations":"","email":"","is_corresponding":false,"name":"Barbora Kozl\u00edkov\u00e1"},{"affiliations":"","email":"","is_corresponding":true,"name":"Michael Krone"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Michael Krone"],"doi":"10.1109/TVCG.2023.3337642","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Molecular Docking, AutoDock, Virtual Screening, Visual Analysis, Visualization, Clustering, Protein-Ligand Interaction."],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233337642","time_end":"","time_stamp":"","time_start":"","title":"InVADo: Interactive Visual Analysis of Molecular Docking Data","uid":"v-tvcg-20233337642","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20233338451":{"abstract":"This paper investigates the role of text in visualizations, specifically the impact of text position, semantic content, and biased wording. Two empirical studies were conducted based on two tasks (predicting data trends and appraising bias) using two visualization types (bar and line charts). While the addition of text had a minimal effect on how people perceive data trends, there was a significant impact on how biased they perceive the authors to be. This finding revealed a relationship between the degree of bias in textual information and the perception of the authors' bias. Exploratory analyses support an interaction between a person's prediction and the degree of bias they perceived. This paper also develops a crowdsourced method for creating chart annotations that range from neutral to highly biased. This research highlights the need for designers to mitigate potential polarization of readers' opinions based on how authors' ideas are expressed.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Chase Stokes"},{"affiliations":"","email":"","is_corresponding":false,"name":"Cindy Xiong Bearfield"},{"affiliations":"","email":"","is_corresponding":false,"name":"Marti Hearst"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Chase Stokes"],"doi":"10.1109/TVCG.2023.3338451","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Visualization, text, annotation, perceived bias, judgment, prediction"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233338451","time_end":"","time_stamp":"","time_start":"","title":"The Role of Text in Visualizations: How Annotations Shape Perceptions of Bias and Influence Predictions","uid":"v-tvcg-20233338451","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20233340770":{"abstract":"We present VoxAR, a method to facilitate an effective visualization of volume-rendered objects in optical see-through head-mounted displays (OST-HMDs). The potential of augmented reality (AR) to integrate digital information into the physical world provides new opportunities for visualizing and interpreting scientific data. 
However, a limitation of OST-HMD technology is that rendered pixels of a virtual object can interfere with the colors of the real world, making it challenging to perceive the augmented virtual information accurately. We address this challenge in a two-step approach. First, VoxAR determines an appropriate placement of the volume-rendered object in the real-world scene by evaluating a set of spatial and environmental objectives, managed as user-selected preferences and pre-defined constraints. We achieve a real-time solution by implementing the objectives using a GPU shader language. Next, VoxAR adjusts the colors of the input transfer function (TF) based on the real-world placement region. Specifically, we introduce a novel optimization method that adjusts the TF colors such that the resulting volume-rendered pixels are discernible against the background and the TF maintains the perceptual mapping between the colors and data intensity values. Finally, we present an assessment of our approach through objective evaluations and subjective user studies.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Saeed Boorboor"},{"affiliations":"","email":"","is_corresponding":false,"name":"Matthew S. Castellana"},{"affiliations":"","email":"","is_corresponding":false,"name":"Yoonsang Kim"},{"affiliations":"","email":"","is_corresponding":false,"name":"Zhutian Chen"},{"affiliations":"","email":"","is_corresponding":false,"name":"Johanna Beyer"},{"affiliations":"","email":"","is_corresponding":false,"name":"Hanspeter Pfister"},{"affiliations":"","email":"","is_corresponding":false,"name":"Arie E. Kaufman"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Saeed Boorboor"],"doi":"10.1109/TVCG.2023.3340770","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Adaptive Visualization, Situated Visualization, Augmented Reality, Volume Rendering"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233340770","time_end":"","time_stamp":"","time_start":"","title":"VoxAR: Adaptive Visualization of Volume Rendered Objects in Optical See-Through Augmented Reality","uid":"v-tvcg-20233340770","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20233341990":{"abstract":"We report on challenges and considerations for supporting design processes for visualizations in motion embedded in sports videos. We derive our insights from analyzing swimming race visualizations and motion-related data, building a technology probe, and conducting a study with designers. Understanding how to design situated visualizations in motion is important for a variety of contexts. Competitive sports coverage, in particular, increasingly includes information on athlete or team statistics and records. Although moving visual representations attached to athletes or other targets are starting to appear, systematic investigations on how to best support their design process in the context of sports videos are still missing. Our work makes several contributions in identifying opportunities for visualizations to be added to swimming competition coverage but, most importantly, in identifying requirements and challenges for designing situated visualizations in motion. 
Our investigations include the analysis of a survey with swimming enthusiasts on their motion-related information needs, an ideation workshop to collect designs and elicit design challenges, the design of a technology probe that allows designers to create embedded visualizations in motion based on real data, and an evaluation with visualization designers that aimed to understand the benefits of designing directly on videos.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Lijie Yao"},{"affiliations":"","email":"","is_corresponding":false,"name":"Romain Vuillemot"},{"affiliations":"","email":"","is_corresponding":false,"name":"Anastasia Bezerianos"},{"affiliations":"","email":"","is_corresponding":false,"name":"Petra Isenberg"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Lijie Yao"],"doi":"10.1109/TVCG.2023.3341990","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Data visualization, Sports, Videos, Probes, Surveys, Authoring systems, Games, Design framework, Embedded visualization, Sports analytics, Visualization in motion"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233341990","time_end":"","time_stamp":"","time_start":"","title":"Designing for Visualization in Motion: Embedding Visualizations in Swimming Videos","uid":"v-tvcg-20233341990","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20233345340":{"abstract":"Label quality issues, such as noisy labels and imbalanced class distributions, have negative effects on model performance. Automatic reweighting methods identify problematic samples with label quality issues by recognizing their negative effects on validation samples and assigning lower weights to them. However, these methods fail to achieve satisfactory performance when the validation samples are of low quality. To tackle this, we develop Reweighter, a visual analysis tool for sample reweighting. The reweighting relationships between validation samples and training samples are modeled as a bipartite graph. Based on this graph, a validation sample improvement method is developed to improve the quality of validation samples. Since the automatic improvement may not always be perfect, a co-cluster-based bipartite graph visualization is developed to illustrate the reweighting relationships and support the interactive adjustments to validation samples and reweighting results. The adjustments are converted into the constraints of the validation sample improvement method to further improve validation samples. 
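The bipartite reweighting relationships can be read as an influence matrix between validation and training samples. The heuristic below, with made-up numbers, sketches the general automatic-reweighting idea such tools build on rather than the paper's exact method: training samples whose net influence on validation loss is harmful receive zero weight:

```python
# Entry (i, j) approximates how much up-weighting training sample j
# reduces the loss on validation sample i (illustrative values only;
# not Reweighter's actual pipeline).
import numpy as np

influence = np.array([[ 0.8, -0.2,  0.1],
                      [ 0.5, -0.6,  0.3]])   # validation x training

net = influence.sum(axis=0)                  # net effect per training sample
weights = np.clip(net, 0.0, None)            # drop harmful (e.g., mislabeled)
weights /= weights.max() if weights.max() > 0 else 1.0
print(weights)                               # e.g., noisy-label sample -> 0
```

The paper's point is that this logic breaks down when the validation rows themselves are unreliable, which is why Reweighter lets analysts inspect and adjust both sides of the graph.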
We demonstrate the effectiveness of Reweighter in improving reweighting results through quantitative evaluation and two case studies.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Weikai Yang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Yukai Guo"},{"affiliations":"","email":"","is_corresponding":false,"name":"Jing Wu"},{"affiliations":"","email":"","is_corresponding":false,"name":"Zheng Wang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Lan-Zhe Guo"},{"affiliations":"","email":"","is_corresponding":false,"name":"Yu-Feng Li"},{"affiliations":"","email":"","is_corresponding":false,"name":"Shixia Liu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Weikai Yang"],"doi":"10.1109/TVCG.2023.3345340","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233345340","time_end":"","time_stamp":"","time_start":"","title":"Interactive Reweighting for Mitigating Label Quality Issues","uid":"v-tvcg-20233345340","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20233345373":{"abstract":"Traditional deep learning algorithms assume that all data is available during training, which presents challenges when handling large-scale time-varying data. To address this issue, we propose a data reduction pipeline called knowledge distillation-based implicit neural representation (KD-INR) for compressing large-scale time-varying data. The approach consists of two stages: spatial compression and model aggregation. In the first stage, each time step is compressed using an implicit neural representation with bottleneck layers and sampling that preserves features of interest. In the second stage, we utilize an offline knowledge distillation algorithm to extract knowledge from the trained models and aggregate it into a single model. We evaluated our approach on a variety of time-varying volumetric data sets. 
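The first stage in miniature: fit a coordinate MLP so that f(x, y, z) reproduces one time step, after which the network weights act as the compressed representation. This is a generic PyTorch sketch, not the paper's architecture or sampling scheme:

```python
# Fit an implicit neural representation to one volume time step;
# the trained weights are the compressed form (generic sketch only).
import torch
import torch.nn as nn

volume = torch.rand(16, 16, 16)                     # stand-in volume data
coords = torch.stack(torch.meshgrid(
    *(torch.linspace(-1, 1, 16),) * 3, indexing="ij"), dim=-1).reshape(-1, 3)
values = volume.reshape(-1, 1)

inr = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(),   # bottleneck-style MLP
                    nn.Linear(64, 1))
opt = torch.optim.Adam(inr.parameters(), lr=1e-3)
for step in range(200):                             # fit f(coords) ~ values
    opt.zero_grad()
    loss = nn.functional.mse_loss(inr(coords), values)
    loss.backward()
    opt.step()
print(f"reconstruction MSE: {loss.item():.4f}")
```

The second stage then distills many such per-time-step networks into one aggregate model, avoiding the need to keep all time steps in memory during training.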
Both quantitative and qualitative results, such as PSNR, LPIPS, and rendered images, demonstrate that KD-INR surpasses the state-of-the-art approaches, including learning-based (i.e., CoordNet, NeurComp, and SIREN) and lossy compression (i.e., SZ3, ZFP, and TTHRESH) methods, at various compression ratios ranging from hundreds to ten thousand.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":false,"name":"Jun Han"},{"affiliations":"","email":"","is_corresponding":false,"name":"Hao Zheng"},{"affiliations":"","email":"","is_corresponding":false,"name":"Change Bi"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Han Jun"],"doi":"10.1109/TVCG.2023.3345373","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Time-varying data compression, implicit neural representation, knowledge distillation, volume visualization."],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233345373","time_end":"","time_stamp":"","time_start":"","title":"KD-INR: Time-Varying Volumetric Data Compression via Knowledge Distillation-based Implicit Neural Representation","uid":"v-tvcg-20233345373","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20233346640":{"abstract":"Is it true that if citizens understand hurricane probabilities, they will make more rational decisions for evacuation? Finding answers to such questions is not straightforward in the literature because the terms \u201cjudgment\u201d and \u201cdecision making\u201d are often used interchangeably. This terminology conflation leads to a lack of clarity on whether people make suboptimal decisions because of inaccurate judgments of information conveyed in visualizations or because they use alternative yet currently unknown heuristics. To decouple judgment from decision making, we review relevant concepts from the literature and present two preregistered experiments (N=601) to investigate if the task (judgment vs. decision making), the scenario (sports vs. humanitarian), and the visualization (quantile dotplots, density plots, probability bars) affect accuracy. While experiment 1 was inconclusive, we found evidence for a difference in experiment 2. Contrary to our expectations and previous research, which found decisions less accurate than their direct-equivalent judgments, our results pointed in the opposite direction. Our findings further revealed that decisions were less vulnerable to status-quo bias, suggesting decision makers may disfavor responses associated with inaction. We also found that both scenario and visualization types can influence people's judgments and decisions. Although effect sizes are not large and results should be interpreted carefully, we conclude that judgments cannot be safely used as proxy tasks for decision making, and discuss implications for visualization research and beyond. 
Materials and preregistrations are available at https://osf.io/ufzp5/?view_only=adc0f78a23804c31bf7fdd9385cb264f.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Ba\u015fak Oral"},{"affiliations":"","email":"","is_corresponding":false,"name":"Pierre Dragicevic"},{"affiliations":"","email":"","is_corresponding":false,"name":"Alexandru Telea"},{"affiliations":"","email":"","is_corresponding":false,"name":"Evanthia Dimara"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Ba\u015fak Oral"],"doi":"10.1109/TVCG.2023.3346640","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Data visualization, Task analysis, Decision making, Visualization, Bars, Sports, Terminology, Cognition, Decision Making, Judgment, Psychology, Visualization"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233346640","time_end":"","time_stamp":"","time_start":"","title":"Decoupling Judgment and Decision Making: A Tale of Two Tails","uid":"v-tvcg-20233346640","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20233346641":{"abstract":"Currently, growing data sources and long-running algorithms impede user attention and interaction with visual analytics applications. Progressive visualization (PV) and visual analytics (PVA) alleviate this problem by allowing immediate feedback and interaction with large datasets and complex computations, avoiding waiting for complete results by using partial results improving with time. Yet, creating a progressive visualization requires more effort than creating a regular visualization but also opens up new possibilities, such as steering the computations towards more relevant parts of the data, thus saving computational resources. However, there is currently no comprehensive overview of the design space for progressive visualization systems. We surveyed the related work on PV and derived a new taxonomy for progressive visualizations by systematically categorizing all PV publications that included visualizations with progressive features. Progressive visualizations can be categorized by well-known visualization taxonomies, but we also found that progressive visualizations can be distinguished by the way they manage their data processing, data domain, and visual update. Furthermore, we identified key properties such as uncertainty, steering, visual stability, and real-time processing that are significantly different in progressive applications. We also collected evaluation methodologies reported by the publications and conclude with statistical findings, research gaps, and open challenges. 
A continuously updated visual browser of the survey data is available at visualsurvey.net/pva.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Alex Ulmer"},{"affiliations":"","email":"","is_corresponding":false,"name":"Marco Angelini"},{"affiliations":"","email":"","is_corresponding":false,"name":"Jean-Daniel Fekete"},{"affiliations":"","email":"","is_corresponding":false,"name":"J\u00f6rn Kohlhammer"},{"affiliations":"","email":"","is_corresponding":false,"name":"Thorsten May"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Alex Ulmer"],"doi":"10.1109/TVCG.2023.3346641","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Data visualization, Convergence, Visual analytics, Taxonomy Surveys, Rendering (computer graphics), Task analysis, Progressive Visual Analytics, Progressive Visualization, Taxonomy, State-of-the-Art Report, Survey"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233346641","time_end":"","time_stamp":"","time_start":"","title":"A Survey on Progressive Visualization","uid":"v-tvcg-20233346641","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20233346713":{"abstract":"Recent growth in the popularity of large language models has led to their increased usage for summarizing, predicting, and generating text, making it vital to help researchers and engineers understand how and why they work. We present KnowledgeVIS, a human-in-the-loop visual analytics system for interpreting language models using fill-in-the-blank sentences as prompts. By comparing predictions between sentences, KnowledgeVIS reveals learned associations that intuitively connect what language models learn during training to natural language tasks downstream, helping users create and test multiple prompt variations, analyze predicted words using a novel semantic clustering technique, and discover insights using interactive visualizations. Collectively, these visualizations help users identify the likelihood and uniqueness of individual predictions, compare sets of predictions between prompts, and summarize patterns and relationships between predictions across all prompts. 
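The fill-in-the-blank probing workflow can be reproduced with the Hugging Face transformers fill-mask pipeline (an assumed stand-in here; KnowledgeVIS wraps its own models and adds the visual analysis layer on top):

```python
# Compare masked-token predictions across prompt variations, the basic
# probing operation KnowledgeVIS builds its visualizations on.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")
prompts = ["The nurse said [MASK] would be late.",
           "The doctor said [MASK] would be late."]
for prompt in prompts:                  # contrast predictions between prompts
    top = [(p["token_str"], round(p["score"], 3)) for p in fill(prompt)[:3]]
    print(prompt, "->", top)
```

Diverging predictions between such minimally different prompts are the raw material for the stereotype-evaluation use case described below.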
We demonstrate the capabilities of KnowledgeVIS with feedback from six NLP experts as well as three different use cases: (1) probing biomedical knowledge in two domain-adapted models; (2) evaluating harmful identity stereotypes; and (3) discovering facts and relationships between three general-purpose models.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Adam Coscia"},{"affiliations":"","email":"","is_corresponding":false,"name":"Alex Endert"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Adam Coscia"],"doi":"10.1109/TVCG.2023.3346713","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Visual analytics, language models, prompting, interpretability, machine learning."],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233346713","time_end":"","time_stamp":"","time_start":"","title":"KnowledgeVIS: Interpreting Language Models by Comparing Fill-in-the-Blank Prompts","uid":"v-tvcg-20233346713","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20243350076":{"abstract":"Ensembles of contours arise in various applications like simulation, computer-aided design, and semantic segmentation. Uncovering ensemble patterns and analyzing individual members is a challenging task that suffers from clutter. Ensemble statistical summarization can alleviate this issue by permitting the analysis of ensembles' distributional components like the mean and median, confidence intervals, and outliers. Contour boxplots, powered by Contour Band Depth (CBD), are a popular non-parametric ensemble summarization method that benefits from CBD's generality, robustness, and theoretical properties. In this work, we introduce Inclusion Depth (ID), a new notion of contour depth with three defining characteristics. First, ID is a generalization of functional Half-Region Depth, which offers several theoretical guarantees. Second, ID relies on a simple principle: the inside/outside relationships between contours. This facilitates implementing ID and understanding its results. Third, the computational complexity of ID scales quadratically in the number of members of the ensemble, improving on CBD's cubic complexity. In practice, this also speeds up the computation, enabling the use of ID for exploring large contour ensembles or in contexts requiring multiple depth evaluations like clustering. In a series of experiments on synthetic data and case studies with meteorological and segmentation data, we evaluate ID's performance and demonstrate its capabilities for the visual analysis of contour ensembles.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":false,"name":"Nicolas F. 
Chaves-de-Plaza"},{"affiliations":"","email":"","is_corresponding":false,"name":"Prerak Mody"},{"affiliations":"","email":"","is_corresponding":false,"name":"Marius Staring"},{"affiliations":"","email":"","is_corresponding":false,"name":"Ren\u00e9 van Egmond"},{"affiliations":"","email":"","is_corresponding":false,"name":"Anna Vilanova"},{"affiliations":"","email":"","is_corresponding":false,"name":"Klaus Hildebrandt"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Nicol\u00e1s Ch\u00e1ves"],"doi":"10.1109/TVCG.2024.3350076","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Uncertainty visualization, contours, ensemble summarization, depth statistics."],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243350076","time_end":"","time_stamp":"","time_start":"","title":"Inclusion Depth for Contour Ensembles","uid":"v-tvcg-20243350076","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20243354561":{"abstract":"Interactive visualization can support fluid exploration but is often limited to predetermined tasks. Scripting can support a vast range of queries but may be more cumbersome for free-form exploration. Embedding interactive visualization in scripting environments, such as computational notebooks, provides an opportunity to leverage the strengths of both direct manipulation and scripting. We investigate interactive visualization design methodology, choices, and strategies under this paradigm through a design study of calling context trees used in performance analysis, a field which exemplifies typical exploratory data analysis workflows with big data and hard to define problems. We first produce a formal task analysis assigning tasks to graphical or scripting contexts based on their specificity, frequency, and suitability. We then design a notebook-embedded interactive visualization and validate it with intended users. In a follow-up study, we present participants with multiple graphical and scripting interaction modes to elicit feedback about notebook-embedded visualization design, finding consensus in support of the interaction model. We report and reflect on observations regarding the process and design implications for combining visualization and scripting in notebooks.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Connor Scully-Allison"},{"affiliations":"","email":"","is_corresponding":false,"name":"Ian Lumsden"},{"affiliations":"","email":"","is_corresponding":false,"name":"Katy Williams"},{"affiliations":"","email":"","is_corresponding":false,"name":"Jesse Bartels"},{"affiliations":"","email":"","is_corresponding":false,"name":"Michela Taufer"},{"affiliations":"","email":"","is_corresponding":false,"name":"Stephanie Brink"},{"affiliations":"","email":"","is_corresponding":false,"name":"Abhinav Bhatele"},{"affiliations":"","email":"","is_corresponding":false,"name":"Olga Pearce"},{"affiliations":"","email":"","is_corresponding":false,"name":"Katherine E. 
Isaacs"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Connor Scully-Allison"],"doi":"10.1109/TVCG.2024.3354561","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Exploratory Data Analysis, Interactive Data Analysis, Computational Notebooks, Hybrid Visualization-Scripting, Visualization Design"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243354561","time_end":"","time_stamp":"","time_start":"","title":"Design Concerns for Integrated Scripting and Interactive Visualization in Notebook Environments","uid":"v-tvcg-20243354561","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20243355884":{"abstract":"News articles containing data visualizations play an important role in informing the public on issues ranging from public health to politics. Recent research on the persuasive appeal of data visualizations suggests that prior attitudes can be notoriously difficult to change. Inspired by an NYT article, we designed two experiments to evaluate the impact of elicitation and contrasting narratives on attitude change, recall, and engagement. We hypothesized that eliciting prior beliefs leads to more elaborative thinking that ultimately results in higher attitude change, better recall, and engagement. Our findings revealed that visual elicitation leads to higher engagement in terms of feelings of surprise. While there is an overall attitude change across all experiment conditions, we did not observe a significant effect of belief elicitation on attitude change. With regard to recall error, while participants in the draw trend elicitation exhibited significantly lower recall error than participants in the categorize trend condition, we found no significant difference in recall error when comparing elicitation conditions to no elicitation. In a follow-up study, we added contrasting narratives with the purpose of making the main visualization (communicating data on the focal issue) appear strikingly different. Compared to the results of Study 1, we found that contrasting narratives improved engagement in terms of surprise and interest but interestingly resulted in higher recall error and no significant change in attitude. 
We discuss the effects of elicitation and contrasting narratives in the context of topic involvement and the strengths of temporal trends encoded in the data visualization.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Milad Rogha"},{"affiliations":"","email":"","is_corresponding":false,"name":"Subham Sah"},{"affiliations":"","email":"","is_corresponding":false,"name":"Alireza Karduni"},{"affiliations":"","email":"","is_corresponding":false,"name":"Douglas Markant"},{"affiliations":"","email":"","is_corresponding":false,"name":"Wenwen Dou"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Milad Rogha"],"doi":"10.1109/TVCG.2024.3355884","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Data Visualization, Market Research, Visualization, Uncertainty, Data Models, Correlation, Attitude Control, Belief Elicitation, Visual Elicitation, Data Visualization, Contrasting Narratives"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243355884","time_end":"","time_stamp":"","time_start":"","title":"The Impact of Elicitation and Contrasting Narratives on Engagement, Recall and Attitude Change with News Articles Containing Data Visualization","uid":"v-tvcg-20243355884","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20243356566":{"abstract":"The increasing ubiquity of data in everyday life has elevated the importance of data literacy and accessible data representations, particularly for individuals with disabilities. While prior research predominantly focuses on the needs of the visually impaired, our survey aims to broaden this scope by investigating accessible data representations across a more inclusive spectrum of disabilities. After conducting a systematic review of 152 accessible data representation papers from ACM and IEEE databases, we found that roughly 78% of existing articles center on vision impairments. In this paper, we conduct a comprehensive review of the remaining 22% of papers focused on underrepresented disability communities. We developed categorical dimensions based on accessibility, visualization, and human-computer interaction to classify the papers. These dimensions include the community of focus, issues addressed, contribution type, study methods, participants, data type, visualization type, and data domain. Our work redefines accessible data representations by illustrating their application for disabilities beyond those related to vision. Building on our literature review, we identify and discuss opportunities for future research in accessible data representations. All supplemental materials are available at https://osf.io/yv4xm/?view_only=7b36a3fbf7a14b3888029966faa3def9.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":false,"name":"Brianna L. Wimer"},{"affiliations":"","email":"","is_corresponding":false,"name":"Laura South"},{"affiliations":"","email":"","is_corresponding":false,"name":"Keke Wu"},{"affiliations":"","email":"","is_corresponding":false,"name":"Danielle Albers Szafir"},{"affiliations":"","email":"","is_corresponding":false,"name":"Michelle A. Borkin"},{"affiliations":"","email":"","is_corresponding":false,"name":"Ronald A. 
Metoyer"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Brianna Wimer"],"doi":"10.1109/TVCG.2024.3356566","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Accessibility, Data Representations."],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243356566","time_end":"","time_stamp":"","time_start":"","title":"Beyond Vision Impairments: Redefining the Scope of Accessible Data Representations","uid":"v-tvcg-20243356566","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20243358919":{"abstract":"We conduct two in-lab experiments (N=93) to evaluate the effectiveness of Gantt charts, extended Gantt charts, and stringline charts for visualizing fixed-order event sequence data. We first formulate five types of event sequences and define three types of sequence elements: point events, interval events, and the temporal gaps between them. Our two experiments focus on event sequences with a pre-defined, fixed order, and measure task error rates and completion time. The first experiment shows single sequences and assesses the three charts' performance in comparing event duration or gap. The second experiment shows multiple sequences and evaluates how well the charts reveal temporal patterns. The results suggest that when visualizing single fixed-order event sequences, 1) Gantt and extended Gantt charts lead to comparable error rates in the duration-comparing task; 2) Gantt charts exhibit either shorter or equal completion time than extended Gantt charts; 3) both Gantt and extended Gantt charts demonstrate shorter completion times than stringline charts; 4) however, stringline charts outperform the other two charts with fewer errors in the comparing task when event type counts are high. Additionally, when visualizing multiple point-based fixed-order event sequences, stringline charts require less time than Gantt charts for people to find temporal patterns. 
Based on these findings, we discuss design opportunities for visualizing fixed-order event sequences and discuss future avenues for optimizing these charts.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Junxiu Tang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Fumeng Yang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Jiang Wu"},{"affiliations":"","email":"","is_corresponding":false,"name":"Yifang Wang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Jiayi Zhou"},{"affiliations":"","email":"","is_corresponding":false,"name":"Xiwen Cai"},{"affiliations":"","email":"","is_corresponding":false,"name":"Lingyun Yu"},{"affiliations":"","email":"","is_corresponding":false,"name":"Yingcai Wu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Junxiu Tang"],"doi":"10.1109/TVCG.2024.3358919","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Gantt chart, stringline chart, Marey's graph, event sequence, empirical study"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243358919","time_end":"","time_stamp":"","time_start":"","title":"A Comparative Study on Fixed-order Event Sequence Visualizations: Gantt, Extended Gantt, and Stringline Charts","uid":"v-tvcg-20243358919","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20243364388":{"abstract":"Seasonal-trend decomposition based on loess (STL) is a powerful tool to explore time series data visually. In this paper, we present an extension of STL to uncertain data, named uncertainty-aware STL (UASTL). Our method propagates multivariate Gaussian distributions mathematically exactly through the entire analysis and visualization pipeline. Thereby, stochastic quantities shared between the components of the decomposition are preserved. Moreover, we present application scenarios with uncertainty modeling based on Gaussian processes, e.g., data with uncertain areas or missing values. Besides these mathematical results and modeling aspects, we introduce visualization techniques that address the challenges of uncertainty visualization and the problem of visualizing highly correlated components of a decomposition. The global uncertainty propagation enables the time series visualization with STL-consistent samples, the exploration of correlation between and within the decomposition's components, and the analysis of the impact of varying uncertainty. Finally, we show the usefulness of UASTL and the importance of uncertainty visualization with several examples. 
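The mathematical core of exact propagation is that loess smoothing is linear in the data: if the trend is A·y and y ~ N(mu, Sigma), then the trend is distributed as N(A·mu, A·Sigma·Aᵀ). The sketch below substitutes a moving-average matrix for loess (a simplifying assumption; UASTL carries this through the full STL pipeline):

```python
# Exact propagation of a Gaussian through a linear smoother:
# trend = A @ y  with  y ~ N(mu, Sigma)  implies
# trend ~ N(A @ mu, A @ Sigma @ A.T).
import numpy as np

n, k = 50, 5
A = np.zeros((n, n))                     # moving-average smoothing matrix
for i in range(n):
    lo, hi = max(0, i - k), min(n, i + k + 1)
    A[i, lo:hi] = 1.0 / (hi - lo)

t = np.arange(n)
mu = 0.05 * t + np.sin(2 * np.pi * t / 12)               # input mean
Sigma = 0.2 * np.exp(-0.5 * (t[:, None] - t[None, :]) ** 2 / 9.0)

trend_mu = A @ mu                        # propagated mean (exact)
trend_Sigma = A @ Sigma @ A.T            # propagated covariance (exact)
print(trend_mu[:3], np.sqrt(np.diag(trend_Sigma))[:3])
```

Because the full covariance is carried along rather than just per-point variances, correlations between the trend, seasonal, and residual components stay available for the visualizations described above.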
We also compare UASTL with conventional STL.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Tim Krake"},{"affiliations":"","email":"","is_corresponding":false,"name":"Daniel Kl\u00f6tzl"},{"affiliations":"","email":"","is_corresponding":false,"name":"David H\u00e4gele"},{"affiliations":"","email":"","is_corresponding":false,"name":"Daniel Weiskopf"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Tim Krake"],"doi":"10.1109/TVCG.2024.3364388","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["- I.6.9.g Visualization techniques and methodologies < I.6.9 Visualization < I.6 Simulation, Modeling, and Visualization < I Computing Methodologies - G.3 Probability and Statistics < G Mathematics of Computing - G.3.n Statistical computing < G.3 Probability and Statistics < G Mathematics of Computing - G.3.p Stochastic processes < G.3 Probability and Statistics < G Mathematics of Computing"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243364388","time_end":"","time_stamp":"","time_start":"","title":"Uncertainty-Aware Seasonal-Trend Decomposition Based on Loess","uid":"v-tvcg-20243364388","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20243364841":{"abstract":"The need to understand the structure of hierarchical or high-dimensional data is present in a variety of fields. Hyperbolic spaces have proven to be an important tool for embedding computations and analysis tasks as their non-linear nature lends itself well to tree or graph data. Subsequently, they have also been used in the visualization of high-dimensional data, where they exhibit increased embedding performance. However, none of the existing dimensionality reduction methods for embedding into hyperbolic spaces scale well with the size of the input data. That is because the embeddings are computed via iterative optimization schemes and the computation cost of every iteration is quadratic in the size of the input. Furthermore, due to the non-linear nature of hyperbolic spaces, Euclidean acceleration structures cannot directly be translated to the hyperbolic setting. This paper introduces the first acceleration structure for hyperbolic embeddings, building upon a polar quadtree. We compare our approach with existing methods and demonstrate that it computes embeddings of similar quality in significantly less time. Implementation and scripts for the experiments can be found at this https URL.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Martin Skrodzki"},{"affiliations":"","email":"","is_corresponding":false,"name":"Hunter van Geffen"},{"affiliations":"","email":"","is_corresponding":false,"name":"Nicolas F. 
Chaves-de-Plaza"},{"affiliations":"","email":"","is_corresponding":false,"name":"Thomas H\u00f6llt"},{"affiliations":"","email":"","is_corresponding":false,"name":"Elmar Eisemann"},{"affiliations":"","email":"","is_corresponding":false,"name":"Klaus Hildebrandt"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Martin Skrodzki"],"doi":"10.1109/TVCG.2024.3364841","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Human-Computer Interaction (cs.HC); Artificial Intelligence (cs.AI); Machine Learning (cs.LG); Quantitative Methods (q-bio.QM); Machine Learning (stat.ML); Dimensionality reduction, t-SNE, hyperbolic embedding, acceleration structure"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243364841","time_end":"","time_stamp":"","time_start":"","title":"Accelerating hyperbolic t-SNE","uid":"v-tvcg-20243364841","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20243365089":{"abstract":"Implicit Neural Representations (INRs) are widely used for scientific data reduction and visualization by modeling the function that maps a spatial location to a data value. Without any prior knowledge about the spatial distribution of values, we are forced to sample densely from INRs to perform visualization tasks like iso-surface extraction, which can be very computationally expensive. Recently, range analysis has shown promising results in improving the efficiency of geometric queries, such as ray casting and hierarchical mesh extraction, on INRs for 3D geometries by using arithmetic rules to bound the output range of the network within a spatial region. However, the analysis bounds are often too conservative for complex scientific data. In this paper, we present an improved technique for range analysis by revisiting the arithmetic rules and analyzing the probability distribution of the network output within a spatial region. We model this distribution efficiently as a Gaussian distribution by applying the central limit theorem. Excluding low probability values, we are able to tighten the output bounds, resulting in a more accurate estimation of the value range, and hence more accurate identification of iso-surface cells and more efficient iso-surface extraction on INRs. 
Our approach demonstrates superior performance in terms of the iso-surface extraction time on four datasets compared to the original range analysis method and can also be generalized to other geometric query tasks.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Haoyu Li"},{"affiliations":"","email":"","is_corresponding":false,"name":"Han-Wei Shen"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Haoyu Li"],"doi":"10.1109/TVCG.2024.3365089","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Iso-surface extraction, implicit neural representation, uncertainty propagation, affine arithmetic."],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243365089","time_end":"","time_stamp":"","time_start":"","title":"Improving Efficiency of Iso-Surface Extraction on Implicit Neural Representations Using Uncertainty Propagation","uid":"v-tvcg-20243365089","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20243368060":{"abstract":"Visual analytics supports data analysis tasks within complex domain problems. However, due to the richness of data types, visual designs, and interaction designs, users need to recall and process a significant amount of information when they visually analyze data. These challenges emphasize the need for more intelligent visual analytics methods. Large language models have demonstrated the ability to interpret various forms of textual data, offering the potential to facilitate intelligent support for visual analytics. We propose LEVA, a framework that uses large language models to enhance users' VA workflows at multiple stages: onboarding, exploration, and summarization. To support onboarding, we use large language models to interpret visualization designs and view relationships based on system specifications. For exploration, we use large language models to recommend insights based on the analysis of system status and data to facilitate mixed-initiative exploration. For summarization, we present a selective reporting strategy to retrace analysis history through a stream visualization and generate insight reports with the help of large language models. We demonstrate how LEVA can be integrated into existing visual analytics systems. 
Two usage scenarios and a user study suggest that LEVA effectively aids users in conducting visual analytics.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Yuheng Zhao"},{"affiliations":"","email":"","is_corresponding":false,"name":"Yixing Zhang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Yu Zhang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Xinyi Zhao"},{"affiliations":"","email":"","is_corresponding":false,"name":"Junjie Wang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Zekai Shao"},{"affiliations":"","email":"","is_corresponding":false,"name":"Cagatay Turkay"},{"affiliations":"","email":"","is_corresponding":false,"name":"Siming Chen"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yuheng Zhao"],"doi":"10.1109/TVCG.2024.3368060","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Insight recommendation, mixed-initiative, interface agent, large language models, visual analytics"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243368060","time_end":"","time_stamp":"","time_start":"","title":"LEVA: Using Large Language Models to Enhance Visual Analytics","uid":"v-tvcg-20243368060","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20243368621":{"abstract":"The use of natural language interfaces (NLIs) to create charts is becoming increasingly popular due to the intuitiveness of natural language interactions. One key challenge in this approach is to accurately capture user intents and transform them to proper chart specifications. This obstructs the wide use of NLI in chart generation, as users' natural language inputs are generally abstract (i.e., ambiguous or under-specified), without a clear specification of visual encodings. Recently, pre-trained large language models (LLMs) have exhibited superior performance in understanding and generating natural language, demonstrating great potential for downstream tasks. Inspired by this major trend, we propose ChartGPT, a system that generates charts from abstract natural language inputs. However, LLMs struggle to address complex logic problems. To enable the model to accurately specify the complex parameters and perform operations in chart generation, we decompose the generation process into a step-by-step reasoning pipeline, so that the model only needs to reason about a single, specific sub-task during each run. Moreover, LLMs are pre-trained on general datasets, which might be biased for the task of chart generation. To provide adequate visualization knowledge, we create a dataset consisting of abstract utterances and charts and improve model performance through fine-tuning. We further design an interactive interface for ChartGPT that allows users to check and modify the intermediate outputs of each step. 
The effectiveness of the proposed system is evaluated through quantitative evaluations and a user study.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Yuan Tian"},{"affiliations":"","email":"","is_corresponding":false,"name":"Weiwei Cui"},{"affiliations":"","email":"","is_corresponding":false,"name":"Dazhen Deng"},{"affiliations":"","email":"","is_corresponding":false,"name":"Xinjing Yi"},{"affiliations":"","email":"","is_corresponding":false,"name":"Yurun Yang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Haidong Zhang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Yingcai Wu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yuan Tian"],"doi":"10.1109/TVCG.2024.3368621","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Natural language interfaces, large language models, data visualization"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243368621","time_end":"","time_stamp":"","time_start":"","title":"ChartGPT: Leveraging LLMs to Generate Charts from Abstract Natural Language","uid":"v-tvcg-20243368621","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20243372104":{"abstract":"With the rise of short-form video platforms and the increasing availability of data, we see the potential for people to share short-form videos embedded with data in situ (e.g., daily steps when running) to increase the credibility and expressiveness of their stories. However, creating and sharing such videos in situ is challenging since it involves multiple steps and skills (e.g., data visualization creation and video editing), especially for amateurs. By conducting a formative study (N=10) using three design probes, we collected the motivations and design requirements. We then built VisTellAR, a mobile AR authoring tool, to help amateur video creators embed data visualizations in short-form videos in situ. A two-day user study shows that participants (N=12) successfully created various videos with data visualizations in situ and confirmed the tool's ease of use and learnability. 
AR pre-stage authoring helped people set up data visualizations in the real world and enabled richer designs in camera movement and in interaction with gestures and physical objects for storytelling.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Wai Tong"},{"affiliations":"","email":"","is_corresponding":false,"name":"Kento Shigyo"},{"affiliations":"","email":"","is_corresponding":false,"name":"Lin-Ping Yuan"},{"affiliations":"","email":"","is_corresponding":false,"name":"Mingming Fan"},{"affiliations":"","email":"","is_corresponding":false,"name":"Ting-Chuen Pong"},{"affiliations":"","email":"","is_corresponding":false,"name":"Huamin Qu"},{"affiliations":"","email":"","is_corresponding":false,"name":"Meng Xia"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Wai Tong"],"doi":"10.1109/TVCG.2024.3372104","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Personal data, augmented reality, data visualization, storytelling, short-form video"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243372104","time_end":"","time_stamp":"","time_start":"","title":"VisTellAR: Embedding Data Visualization to Short-form Videos Using Mobile Augmented Reality","uid":"v-tvcg-20243372104","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20243372620":{"abstract":"Small multiples are a popular visualization method, displaying different views of a dataset using multiple frames, often with the same scale and axes. However, there is a need to address their potential constraints, especially in the context of human cognitive capacity limits. These limits dictate the maximum information our mind can process at once. We explore the issue of capacity limitation by testing competing theories that describe how the number of frames shown in a display, the scale of the frames, and time constraints impact user performance with small multiples of line charts in an energy grid scenario. In two online studies (Experiment 1 n = 141 and Experiment 2 n = 360) and a follow-up eye-tracking analysis (n=5), we found a linear decline in accuracy with increasing frames across seven tasks, which was not fully explained by differences in frame size, suggesting visual search challenges. Moreover, the studies demonstrate that highlighting specific frames can mitigate some visual search difficulties but, surprisingly, not eliminate them. This research offers insights into optimizing the utility of small multiples by aligning them with human limitations.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Helia Hosseinpour"},{"affiliations":"","email":"","is_corresponding":false,"name":"Laura E. Matzen"},{"affiliations":"","email":"","is_corresponding":false,"name":"Kristin M. Divis"},{"affiliations":"","email":"","is_corresponding":false,"name":"Spencer C. 
Castro"},{"affiliations":"","email":"","is_corresponding":false,"name":"Lace Padilla"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Helia Hosseinpour"],"doi":"10.1109/TVCG.2024.3372620","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Cognition, small multiples, time-series data"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243372620","time_end":"","time_stamp":"","time_start":"","title":"Examining Limits of Small Multiples: Frame Quantity Impacts Judgments with Line Graphs","uid":"v-tvcg-20243372620","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20243374571":{"abstract":"Visualization Recommendation Systems (VRSs) are a novel and challenging field of study aiming to help generate insightful visualizations from data and support non-expert users in information discovery. Among the many contributions proposed in this area, some systems embrace the ambitious objective of imitating human analysts to identify relevant relationships in data and make appropriate design choices to represent these relationships with insightful charts. We denote these systems as \"agnostic\" VRSs since they do not rely on human-provided constraints and rules but try to learn the task autonomously. Despite the high application potential of agnostic VRSs, their progress is hindered by several obstacles, including the absence of standardized datasets to train recommendation algorithms, the difficulty of learning design rules, and defining quantitative criteria for evaluating the perceptual effectiveness of generated plots. This paper summarizes the literature on agnostic VRSs and outlines promising future research directions.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Luca Podo"},{"affiliations":"","email":"","is_corresponding":false,"name":"Bardh Prenkaj"},{"affiliations":"","email":"","is_corresponding":false,"name":"Paola Velardi"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Luca Podo"],"doi":"10.1109/TVCG.2024.3374571","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243374571","time_end":"","time_stamp":"","time_start":"","title":"Agnostic Visual Recommendation Systems: Open Challenges and Future Directions","uid":"v-tvcg-20243374571","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20243376406":{"abstract":"Visualizing event timelines for collaborative text writing is an important application for navigating and understanding such data, as time passes and the size and complexity of both text and timeline increase. They are often employed by applications such as code repositories and collaborative text editors. In this paper, we present a visualization tool to explore historical records of writing of legislative texts, which were discussed and voted on by an assembly of representatives. 
Our visualization focuses on event timelines from text documents that involve multiple people and different topics, allowing for observation of different proposed versions of the text or tracking data provenance of given text sections, while highlighting the connections between all elements involved. We also describe the process of designing such a tool alongside domain experts, with three rounds of evaluation conducted to verify the effectiveness of our design.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":false,"name":"Gabriel D. Cantareira"},{"affiliations":"","email":"","is_corresponding":false,"name":"Yiwen Xing"},{"affiliations":"","email":"","is_corresponding":false,"name":"Nicholas Cole"},{"affiliations":"","email":"","is_corresponding":false,"name":"Rita Borgo"},{"affiliations":"","email":"","is_corresponding":true,"name":"Alfie Abdul-Rahman"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Alfie Abdul-Rahman"],"doi":"10.1109/TVCG.2024.3376406","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Data visualization, Collaboration, History, Humanities, Writing, Navigation, Metadata"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243376406","time_end":"","time_stamp":"","time_start":"","title":"Interactive Hierarchical Timeline for Collaborative Text Negotiation in Historical Records","uid":"v-tvcg-20243376406","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20243381453":{"abstract":"Scatterplots provide a visual representation of bivariate data (or 2D embeddings of multivariate data) that allows for effective analyses of data dependencies, clusters, trends, and outliers. Unfortunately, classical scatterplots suffer from scalability issues, since growing data sizes eventually lead to overplotting and visual clutter on a screen with a fixed resolution, which hinders the data analysis process. We propose an algorithm that compensates for irregular sample distributions by a smooth transformation of the scatterplot's visual domain. Our algorithm evaluates the scatterplot's density distribution to compute a regularization mapping based on integral images of the rasterized density function. The mapping preserves the samples' neighborhood relations. Few regularization iterations suffice to achieve a nearly uniform sample distribution that efficiently uses the available screen space. We further propose approaches to visually convey the transformation that was applied to the scatterplot and compare them in a user study. 
We present a novel parallel algorithm for fast GPU-based integral-image computation, which allows for integrating our de-cluttering approach into interactive visual data analysis systems.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Hennes Rave"},{"affiliations":"","email":"","is_corresponding":false,"name":"Vladimir Molchanov"},{"affiliations":"","email":"","is_corresponding":false,"name":"Lars Linsen"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Hennes Rave"],"doi":"10.1109/TVCG.2024.3381453","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243381453","time_end":"","time_stamp":"","time_start":"","title":"De-cluttering Scatterplots with Integral Images","uid":"v-tvcg-20243381453","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20243382607":{"abstract":"Advanced manufacturing creates increasingly complex objects with material compositions that are often difficult to characterize by a single modality. Our domain scientists are going beyond traditional methods by employing both X-ray and neutron computed tomography to obtain complementary representations expected to better resolve material boundaries. However, the use of two modalities creates its own challenges for visualization, requiring either complex adjustments of multimodal transfer functions or the need for multiple views. Together with experts in nondestructive evaluation, we designed a novel interactive multimodal visualization approach to create a combined view of the co-registered X-ray and neutron acquisitions of industrial objects. Using an automatic topological segmentation of the bivariate histogram of X-ray and neutron values as a starting point, the system provides a simple yet effective interface to easily create, explore, and adjust a multimodal visualization. We propose a widget with simple brushing interactions that enables the user to quickly correct the segmented histogram results. Our semiautomated system enables domain experts to intuitively explore large multimodal datasets without the need for either advanced segmentation algorithms or knowledge of visualization techniques. 
We demonstrate our approach using synthetic examples, industrial phantom objects created to stress multimodal scanning techniques, and real-world objects, and we discuss expert feedback.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":false,"name":"Xuan Huang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Haichao Miao"},{"affiliations":"","email":"","is_corresponding":false,"name":"Hyojin Kim"},{"affiliations":"","email":"","is_corresponding":false,"name":"Andrew Townsend"},{"affiliations":"","email":"","is_corresponding":false,"name":"Kyle Champley"},{"affiliations":"","email":"","is_corresponding":false,"name":"Joseph Tringe"},{"affiliations":"","email":"","is_corresponding":false,"name":"Valerio Pascucci"},{"affiliations":"","email":"","is_corresponding":false,"name":"Peer-Timo Bremer"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Xuan Huang"],"doi":"10.1109/TVCG.2024.3382607","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243382607","time_end":"","time_stamp":"","time_start":"","title":"Bimodal Visualization of Industrial X-ray and Neutron Computed Tomography Data","uid":"v-tvcg-20243382607","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20243382760":{"abstract":"Time-stamped event sequences (TSEQs) are time-oriented data without value information, shifting the focus of users to the exploration of temporal event occurrences. TSEQs exist in application domains, such as sleeping behavior, earthquake aftershocks, and stock market crashes. Domain experts face four challenges, for which they could use interactive and visual data analysis methods. First, TSEQs can be large with respect to both the number of sequences and events, often leading to millions of events. Second, domain experts need validated metrics and features to identify interesting patterns. Third, after identifying interesting patterns, domain experts contextualize the patterns to foster sensemaking. Finally, domain experts seek to reduce data complexity by data simplification and machine learning support. We present IVESA, a visual analytics approach for TSEQs. It supports the analysis of TSEQs at the granularities of sequences and events, complemented with metrics and feature analysis tools. IVESA has multiple linked views that support overview, sort+filter, comparison, details-on-demand, and metadata relation-seeking tasks, as well as data simplification through feature analysis, interactive clustering, filtering, and motif detection and simplification. We evaluated IVESA with three case studies and a user study with six domain experts working with six different datasets and applications. 
Results demonstrate the usability and generalizability of IVESA across applications and cases that had up to 1,000,000 events.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"J\u00fcrgen Bernard"},{"affiliations":"","email":"","is_corresponding":false,"name":"Clara-Maria Barth"},{"affiliations":"","email":"","is_corresponding":false,"name":"Eduard Cuba"},{"affiliations":"","email":"","is_corresponding":false,"name":"Andrea Meier"},{"affiliations":"","email":"","is_corresponding":false,"name":"Yasara Peiris"},{"affiliations":"","email":"","is_corresponding":false,"name":"Ben Shneiderman"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["J\u00fcrgen Bernard"],"doi":"10.1109/TVCG.2024.3382760","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Time-Stamped Event Sequences, Time-Oriented Data, Visual Analytics, Data-First Design Study, Iterative Design, Visual Interfaces, User Evaluation"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243382760","time_end":"","time_stamp":"","time_start":"","title":"Visual Analysis of Time-Stamped Event Sequences","uid":"v-tvcg-20243382760","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20243383089":{"abstract":"The advances in AI-enabled techniques have accelerated the creation and automation of visualizations in the past decade. However, presenting visualizations in a descriptive and generative format remains a challenge. Moreover, current visualization embedding methods focus on standalone visualizations, neglecting the importance of contextual information for multi-view visualizations. To address this issue, we propose a new representation model, Chart2Vec, to learn a universal embedding of visualizations with context-aware information. Chart2Vec aims to support a wide range of downstream visualization tasks such as recommendation and storytelling. Our model considers both structural and semantic information of visualizations in declarative specifications. To enhance the context-aware capability, Chart2Vec employs multi-task learning on both supervised and unsupervised tasks concerning the co-occurrence of visualizations. We evaluate our method through an ablation study, a user study, and a quantitative comparison. 
The results verified the consistency of our embedding method with human cognition and showed its advantages over existing methods.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Qing Chen"},{"affiliations":"","email":"","is_corresponding":false,"name":"Ying Chen"},{"affiliations":"","email":"","is_corresponding":false,"name":"Ruishi Zou"},{"affiliations":"","email":"","is_corresponding":false,"name":"Wei Shuai"},{"affiliations":"","email":"","is_corresponding":false,"name":"Yi Guo"},{"affiliations":"","email":"","is_corresponding":false,"name":"Jiazhe Wang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Nan Cao"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Qing Chen"],"doi":"10.1109/TVCG.2024.3383089","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Representation Learning, Multi-view Visualization, Visual Storytelling, Visualization Embedding"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243383089","time_end":"","time_stamp":"","time_start":"","title":"Chart2Vec: A Universal Embedding of Context-Aware Visualizations","uid":"v-tvcg-20243383089","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20243385118":{"abstract":"Genomics is at the core of precision medicine, and there are high expectations on genomics-enabled improvement of patient outcomes in the years to come. Around the world, initiatives to increase the use of DNA sequencing in clinical routine are being deployed, such as the use of broad panels in the standard care for oncology patients. Such a development comes at the cost of increased demands on throughput in genomic data analysis. In this paper, we use the task of copy number variant (CNV) analysis as a context for exploring visualization concepts for clinical genomics. CNV calls are generated algorithmically, but time-consuming manual intervention is needed to separate relevant findings from irrelevant ones in the resulting large call candidate lists. We present a visualization environment, named Copycat, to support this review task in a clinical scenario. Key components are a scatter-glyph plot replacing the traditional list visualization, and a glyph representation designed for at-a-glance relevance assessments. 
Moreover, we present results from a formative evaluation of the prototype by domain specialists, from which we elicit insights to guide both prototype improvements and visualization for clinical genomics in general.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Emilia St\u00e5hlbom"},{"affiliations":"","email":"","is_corresponding":false,"name":"Jesper Molin"},{"affiliations":"","email":"","is_corresponding":false,"name":"Claes Lundstr\u00f6m"},{"affiliations":"","email":"","is_corresponding":false,"name":"Anders Ynnerman"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Emilia St\u00e5hlbom"],"doi":"10.1109/TVCG.2024.3385118","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Visualization, genomics, copy number variants, clinical decision support, evaluation"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243385118","time_end":"","time_stamp":"","time_start":"","title":"Visualization for diagnostic review of copy number variants in complex DNA sequencing data","uid":"v-tvcg-20243385118","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20243390219":{"abstract":"This system paper documents the technical foundations for the extension of the Topology ToolKit (TTK) to distributed-memory parallelism with the Message Passing Interface (MPI). While several recent papers introduced topology-based approaches for distributed-memory environments, these reported experiments obtained with tailored, mono-algorithm implementations. In contrast, we describe in this paper a versatile approach (supporting both triangulated domains and regular grids) for the support of topological analysis pipelines, i.e. a sequence of topological algorithms interacting together, possibly on distinct numbers of processes. While developing this extension, we faced several algorithmic and software engineering challenges, which we document in this paper. We describe an MPI extension of TTK\u2019s data structure for triangulation representation and traversal, a central component to the global performance and generality of TTK\u2019s topological implementations. We also introduce an intermediate interface between TTK and MPI, both at the global pipeline level, and at the fine-grain algorithmic level. We provide a taxonomy for the distributed-memory topological algorithms supported by TTK, depending on their communication needs, and provide examples of hybrid MPI+thread parallelizations. Detailed performance analyses show that parallel efficiencies range from 20% to 80% (depending on the algorithms), and that the MPI-specific preconditioning introduced by our framework induces a negligible computation time overhead. We illustrate the new distributed-memory capabilities of TTK with an example of an advanced analysis pipeline, combining multiple algorithms, run on the largest publicly available dataset we have found (120 billion vertices) on a standard cluster with 64 nodes (for a total of 1536 cores). Finally, we provide a roadmap for the completion of TTK\u2019s MPI extension, along with generic recommendations for each algorithm communication category.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":false,"name":"E. 
Le Guillou"},{"affiliations":"","email":"","is_corresponding":false,"name":"M. Will"},{"affiliations":"","email":"","is_corresponding":false,"name":"P. Guillou"},{"affiliations":"","email":"","is_corresponding":false,"name":"J. Lukasczyk"},{"affiliations":"","email":"","is_corresponding":false,"name":"P. Fortin"},{"affiliations":"","email":"","is_corresponding":false,"name":"C. Garth"},{"affiliations":"","email":"","is_corresponding":false,"name":"J. Tierny"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Julien Tierny"],"doi":"10.1109/TVCG.2024.3390219","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Topological data analysis, high-performance computing, distributed-memory algorithms."],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243390219","time_end":"","time_stamp":"","time_start":"","title":"TTK is Getting MPI-Ready","uid":"v-tvcg-20243390219","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20243392476":{"abstract":"Areas of interest (AOIs) are well-established means of providing semantic information for visualizing, analyzing, and classifying gaze data. However, the usual manual annotation of AOIs is time consuming and further impaired by ambiguities in label assignments. To address these issues, we present an interactive labeling approach that combines visualization, machine learning, and user-centered explainable annotation. Our system provides uncertainty-aware visualization to build trust in classification with an increasing number of annotated examples. It combines specifically designed EyeFlower glyphs, dimensionality reduction, and selection and exploration techniques in an integrated workflow. The approach is versatile and hardware-agnostic, supporting video stimuli from stationary and unconstrained mobile eye trackin alike. We conducted an expert review to assess labeling strategies and trust building.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Maurice Koch"},{"affiliations":"","email":"","is_corresponding":false,"name":"Nan Cao"},{"affiliations":"","email":"","is_corresponding":false,"name":"Daniel Weiskopf"},{"affiliations":"","email":"","is_corresponding":false,"name":"Kuno Kurzhals"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Maurice Koch"],"doi":"10.1109/TVCG.2024.3392476","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Visual analytics, eye tracking, uncertainty, active learning, trust building"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243392476","time_end":"","time_stamp":"","time_start":"","title":"Active Gaze Labeling: Visualization for Trust Building","uid":"v-tvcg-20243392476","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20243392587":{"abstract":"The issue of traffic congestion poses a significant obstacle to the development of global cities. One promising solution to tackle this problem is intelligent traffic signal control (TSC). Recently, TSC strategies leveraging reinforcement learning (RL) have garnered attention among researchers. 
However, the evaluation of these models has primarily relied on fixed metrics like reward and queue length. This limited evaluation approach provides only a narrow view of the model\u2019s decision-making process, impeding its practical implementation. Moreover, effective TSC necessitates coordinated actions across multiple intersections. Existing visual analysis solutions fall short when applied in multi-agent settings. In this study, we delve into the challenge of interpretability in multi-agent reinforcement learning (MARL), particularly within the context of TSC. We propose MARLens, a visual analytics system tailored to understand MARL-based TSC. Our system serves as a versatile platform for both RL and TSC researchers. It empowers them to explore the model\u2019s features from various perspectives, revealing its decision-making processes and shedding light on interactions among different agents. To facilitate quick identification of critical states, we have devised multiple visualization views, complemented by a traffic simulation module that allows users to replay specific training scenarios. To validate the utility of our proposed system, we present three comprehensive case studies, incorporate insights from domain experts through interviews, and conduct a user study. These collective efforts underscore the feasibility and effectiveness of MARLens in enhancing our understanding of MARL-based TSC systems and pave the way for more informed and efficient traffic management strategies.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":false,"name":"Yutian Zhang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Guohong Zheng"},{"affiliations":"","email":"","is_corresponding":false,"name":"Zhiyuan Liu"},{"affiliations":"","email":"","is_corresponding":false,"name":"Quan Li"},{"affiliations":"","email":"","is_corresponding":true,"name":"Haipeng Zeng"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Haipeng Zeng"],"doi":"10.1109/TVCG.2024.3392587","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Traffic signal control, multi-agent, reinforcement learning, visual analytics"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243392587","time_end":"","time_stamp":"","time_start":"","title":"MARLens: Understanding Multi-agent Reinforcement Learning for Traffic Signal Control via Visual Analytics","uid":"v-tvcg-20243392587","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20243394745":{"abstract":"The fund investment industry heavily relies on the expertise of fund managers, who bear the responsibility of managing portfolios on behalf of clients. With their investment knowledge and professional skills, fund managers gain a competitive advantage over the average investor in the market. Consequently, investors prefer entrusting their investments to fund managers rather than directly investing in funds. For these investors, the primary concern is selecting a suitable fund manager. While previous studies have employed quantitative or qualitative methods to analyze various aspects of fund managers, such as performance metrics, personal characteristics, and performance persistence, they often face challenges when dealing with a large candidate space. 
Moreover, distinguishing whether a fund manager's performance stems from skill or luck poses a challenge, making it difficult to align with investors' preferences in the selection process. To address these challenges, this study characterizes the requirements of investors in selecting suitable fund managers and proposes an interactive visual analytics system called FMLens. This system streamlines the fund manager selection process, allowing investors to efficiently assess and deconstruct fund managers' investment styles and abilities across multiple dimensions. Additionally, the system empowers investors to scrutinize and compare fund managers' performances. The effectiveness of the approach is demonstrated through two case studies and a qualitative user study. Feedback from domain experts indicates that the system excels in analyzing fund managers from diverse perspectives, enhancing the efficiency of fund manager evaluation and selection.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Longfei Chen"},{"affiliations":"","email":"","is_corresponding":false,"name":"Chen Cheng"},{"affiliations":"","email":"","is_corresponding":false,"name":"He Wang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Xiyuan Wang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Yun Tian"},{"affiliations":"","email":"","is_corresponding":false,"name":"Xuanwu Yue"},{"affiliations":"","email":"","is_corresponding":false,"name":"Wong Kam-Kwai"},{"affiliations":"","email":"","is_corresponding":false,"name":"Haipeng Zhang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Suting Hong"},{"affiliations":"","email":"","is_corresponding":false,"name":"Quan Li"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Longfei Chen"],"doi":"10.1109/TVCG.2024.3394745","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Financial Data, Fund Manager Selection, Visual Analytics"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243394745","time_end":"","time_stamp":"","time_start":"","title":"FMLens: Towards Better Scaffolding the Process of Fund Manager Selection in Fund Investments","uid":"v-tvcg-20243394745","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20243397004":{"abstract":"Data charts are prevalent across various fields due to their efficacy in conveying complex data relationships. However, static charts may sometimes struggle to engage readers and efficiently present intricate information, potentially resulting in limited understanding. We introduce \u201cLive Charts,\u201d a new format of presentation that decomposes complex information within a chart and explains the information pieces sequentially through rich animations and accompanying audio narration. We propose an automated approach to revive static charts into Live Charts. Our method integrates GNN-based techniques to analyze the chart components and extract data from charts. Then we adopt large natural language models to generate appropriate animated visuals along with a voice-over to produce Live Charts from static ones. We conducted a thorough evaluation of our approach, which involved the model performance, use cases, a crowd-sourced user study, and expert interviews. 
The results demonstrate that Live Charts offer a multi-sensory experience where readers can follow the information and understand the data insights better. We analyze the benefits and drawbacks of Live Charts over static charts as a new information consumption experience.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":false,"name":"Velitchko Filipov"},{"affiliations":"","email":"","is_corresponding":false,"name":"Alessio Arleo"},{"affiliations":"","email":"","is_corresponding":false,"name":"Markus B\u00f6gl"},{"affiliations":"","email":"","is_corresponding":false,"name":"Silvia Miksch"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Lu Ying"],"doi":"10.1109/TVCG.2024.3397004","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Charts, storytelling, machine learning, automatic visualization"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243397004","time_end":"","time_stamp":"","time_start":"","title":"Reviving Static Charts into Live Charts","uid":"v-tvcg-20243397004","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20243402610":{"abstract":"Point clouds are widely used as a versatile representation of 3D entities and scenes for all scale domains and in a variety of application areas, serving as a fundamental data category to directly convey spatial features. However, due to point sparsity, lack of structure, irregular distribution, and acquisition-related inaccuracies, results of point cloud visualization are often subject to visual complexity and ambiguity. In this regard, non-photorealistic rendering can improve visual communication by reducing the cognitive effort required to understand an image or scene and by directing attention to important features. In the last 20 years, this has been demonstrated by various non-photorealistic rendering approaches that were proposed to target point clouds specifically. However, they do not use a common language or structure for assessment, which complicates comparison and selection. Further, recent developments regarding point cloud characteristics and processing, such as massive data size or web-based rendering, are rarely considered. To address these issues, we present a survey on non-photorealistic rendering approaches for point cloud visualization, providing an overview of the current state of research. We derive a structure for the assessment of approaches, proposing seven primary dimensions for the categorization regarding intended goals, data requirements, used techniques, and mode of operation. 
We then systematically assess corresponding approaches and utilize this classification to identify trends and research gaps, motivating future research in the development of effective non-photorealistic point cloud rendering methods.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Ole Wegen"},{"affiliations":"","email":"","is_corresponding":false,"name":"Willy Scheibel"},{"affiliations":"","email":"","is_corresponding":false,"name":"Matthias Trapp"},{"affiliations":"","email":"","is_corresponding":false,"name":"Rico Richter"},{"affiliations":"","email":"","is_corresponding":false,"name":"J\u00fcrgen D\u00f6llner"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Ole Wegen"],"doi":"10.1109/TVCG.2024.3402610","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Point clouds, survey, non-photorealistic rendering"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243402610","time_end":"","time_stamp":"","time_start":"","title":"A Survey on Non-photorealistic Rendering Approaches for Point Cloud Visualization","uid":"v-tvcg-20243402610","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20243402834":{"abstract":"Impact dynamics are crucial for estimating the growth patterns of NFT projects by tracking the diffusion and decay of their relative appeal among stakeholders. Machine learning methods for impact dynamics analysis are opaque and rigid, lacking interpretability and transparency, whilst stakeholders require interactive tools for informed decision-making. Nevertheless, developing such a tool is challenging due to the substantial, heterogeneous NFT transaction data and the requirements for flexible, customized interactions. To this end, we integrate intuitive visualizations to unveil the impact dynamics of NFT projects. We first conduct a formative study and summarize analysis criteria, including substitution mechanisms, impact attributes, and design requirements from stakeholders. Next, we propose the Minimal Substitution Model to simulate substitutive systems of NFT projects that can be feasibly represented as node-link graphs. Particularly, we utilize attribute-aware techniques to embed the project status and stakeholder behaviors in the layout design. Accordingly, we develop a multi-view visual analytics system, namely NFTracer, allowing interactive analysis of impact dynamics in NFT transactions. We demonstrate the informativeness, effectiveness, and usability of NFTracer by performing two case studies with domain experts and one user study with stakeholders. The studies suggest that NFT projects featuring a higher degree of similarity are more likely to substitute each other. 
The impact of NFT projects within substitutive systems is contingent upon the degree of stakeholders\u2019 influx and projects\u2019 freshness.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Yifan Cao"},{"affiliations":"","email":"","is_corresponding":false,"name":"Qing Shi"},{"affiliations":"","email":"","is_corresponding":false,"name":"Lucas Shen"},{"affiliations":"","email":"","is_corresponding":false,"name":"Kani Chen"},{"affiliations":"","email":"","is_corresponding":false,"name":"Yang Wang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Wei Zeng"},{"affiliations":"","email":"","is_corresponding":false,"name":"Huamin Qu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yifan Cao"],"doi":"10.1109/TVCG.2024.3402834","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Stakeholders, Nonfungible Tokens, Social Networking Online, Visual Analytics, Network Analyzers, Measurement, Layout, Impact Dynamics Analysis, Non-Fungible Tokens (NFTs), NFT Transaction Data, Substitutive Systems, Visual Analytics"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243402834","time_end":"","time_stamp":"","time_start":"","title":"Tracing NFT Impact Dynamics in Transaction-flow Substitutive Systems with Visual Analytics","uid":"v-tvcg-20243402834","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20243406387":{"abstract":"The process of labeling medical text plays a crucial role in medical research. Nonetheless, creating accurately labeled medical texts of high quality is often a time-consuming task that requires specialized domain knowledge. Traditional methods for generating labeled data typically rely on rigid rule-based approaches, which may not adapt well to new tasks. While recent machine learning (ML) methodologies have mitigated the manual labeling efforts, configuring models to align with specific research requirements can be challenging for labelers without technical expertise. Moreover, automated labeling techniques, such as transfer learning, face difficulties in directly incorporating expert input, whereas semi-automated methods, like data programming, allow knowledge integration through rules or knowledge bases but may lack continuous result refinement throughout the entire labeling process. In this study, we present a collaborative human-ML teaming workflow that seamlessly integrates visual cluster analysis and active learning to assist domain experts in labeling medical text with high efficiency. Additionally, we introduce an innovative neural network model called the embedding network, which incorporates expert insights to generate task-specific embeddings for medical texts. We integrate the workflow and embedding network into a visual analytics tool named KMTLabeler, equipped with coordinated multi-level views and interactions. 
Two illustrative case studies, along with a controlled user study, provide substantial evidence of the effectiveness of KMTLabeler in creating an efficient labeling environment for medical text classification.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"He Wang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Yang Ouyang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Yuchen Wu"},{"affiliations":"","email":"","is_corresponding":false,"name":"Chang Jiang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Lixia Jin"},{"affiliations":"","email":"","is_corresponding":false,"name":"Yuanwu Cao"},{"affiliations":"","email":"","is_corresponding":false,"name":"Quan Li"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["He Wang"],"doi":"10.1109/TVCG.2024.3406387","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Medical Text Labeling, Expert Knowledge, Embedding Network, Visual Cluster Analysis, Active Learning"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243406387","time_end":"","time_stamp":"","time_start":"","title":"KMTLabeler: An Interactive Knowledge-Assisted Labeling Tool for Medical Text Classification","uid":"v-tvcg-20243406387","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20243408255":{"abstract":"Generative text-to-image models, which allow users to create appealing images through a text prompt, have seen a dramatic increase in popularity in recent years. However, most users have a limited understanding of how such models work and often rely on trial and error strategies to achieve satisfactory results. The prompt history contains a wealth of information that could provide users with insights into what has been explored and how the prompt changes impact the output image, yet little research attention has been paid to the visual analysis of such a process to support users. We propose the Image Variant Graph, a novel visual representation designed to support comparing prompt-image pairs and exploring the editing history. The Image Variant Graph models prompt differences as edges between corresponding images and presents the distances between images through projection. Based on the graph, we developed the PrompTHis system through co-design with artists. By reviewing and analyzing the prompting history, users can better understand the impact of prompt changes and exercise more effective control over image generation. 
A quantitative user study and qualitative interviews demonstrate that PrompTHis can help users review the prompt history, make sense of the model, and plan their creative process.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Yuhan Guo"},{"affiliations":"","email":"","is_corresponding":false,"name":"Hanning Shao"},{"affiliations":"","email":"","is_corresponding":false,"name":"Can Liu"},{"affiliations":"","email":"","is_corresponding":false,"name":"Kai Xu"},{"affiliations":"","email":"","is_corresponding":false,"name":"Xiaoru Yuan"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yuhan Guo"],"doi":"10.1109/TVCG.2024.3408255","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Text visualization, image visualization, text-to-image generation, editing history, provenance, generative art"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243408255","time_end":"","time_stamp":"","time_start":"","title":"PrompTHis: Visualizing the Process and Influence of Prompt Editing during Text-to-Image Creation","uid":"v-tvcg-20243408255","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20243411575":{"abstract":"Creating an animated data video with audio narration is a time-consuming and complex task that requires expertise. It involves designing complex animations, turning written scripts into audio narrations, and synchronizing visual changes with the narrations. This paper presents WonderFlow, an interactive authoring tool that facilitates narration-centric design of animated data videos. WonderFlow allows authors to easily specify semantic links between text and the corresponding chart elements. Then it automatically generates audio narration by leveraging text-to-speech techniques and aligns the narration with an animation. WonderFlow provides a structure-aware animation library designed to ease chart animation creation, enabling authors to apply pre-designed animation effects to common visualization components. Additionally, authors can preview and refine their data videos within the same system, without having to switch between different creation tools. 
A series of evaluation results confirmed that WonderFlow is easy to use and simplifies the creation of data videos with narration-animation interplay.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":false,"name":"Yun Wang"},{"affiliations":"","email":"","is_corresponding":true,"name":"Leixian Shen"},{"affiliations":"","email":"","is_corresponding":false,"name":"Zhengxin You"},{"affiliations":"","email":"","is_corresponding":false,"name":"Xinhuan Shu"},{"affiliations":"","email":"","is_corresponding":false,"name":"Bongshin Lee"},{"affiliations":"","email":"","is_corresponding":false,"name":"John Thompson"},{"affiliations":"","email":"","is_corresponding":false,"name":"Haidong Zhang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Dongmei Zhang"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Leixian Shen"],"doi":"10.1109/TVCG.2024.3411575","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Data video, Data visualization, Narration-animation interplay, Storytelling, Authoring tool"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243411575","time_end":"","time_stamp":"","time_start":"","title":"WonderFlow: Narration-Centric Design of Animated Data Videos","uid":"v-tvcg-20243411575","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20243411786":{"abstract":"We present a novel method for the interactive construction and rendering of extremely large molecular scenes, capable of representing multiple biological cells in atomistic detail. Our method is tailored for scenes that are procedurally constructed based on a given set of building rules. Rendering of large scenes normally requires the entire scene to be available in-core, or alternatively, it requires out-of-core management to load data into the memory hierarchy as part of the rendering loop. Instead of out-of-core memory management, we propose to procedurally generate the scene on demand, on the fly. The key idea is a positional- and view-dependent procedural scene-construction strategy, where only a fraction of the atomistic scene around the camera is available in the GPU memory at any given time. The atomistic detail is populated into a uniform-space partitioning using a grid that covers the entire scene. Most of the grid cells are not filled with geometry; only those that are potentially seen by the camera are populated. The atomistic detail is populated in a compute shader, and its representation is connected with acceleration data structures for hardware ray tracing on modern GPUs. Objects that are far away, where atomistic detail is not perceivable from a given viewpoint, are represented by a triangle mesh mapped with a seamless texture generated from a rendering of the atomistic detail. The algorithm consists of two pipelines, the construction-compute pipeline and the rendering pipeline, which work together to render molecular scenes containing trillions of atoms at an atomistic resolution far beyond the limit of the GPU memory. 
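A toy sketch of the view-guided, on-demand construction idea described above, assuming a uniform grid and a crude camera-distance test standing in for a real frustum/visibility test; the paper's GPU compute-shader pipeline is far more elaborate, and the constants here are illustrative only.

# A uniform grid covers the whole scene, but geometry is generated only for
# cells near the camera; everything else stays empty until the view changes.
import math

CELL = 10.0    # grid cell edge length (illustrative units)
RADIUS = 50.0  # populate cells whose centers lie within this camera distance

def cells_to_populate(camera, scene_bounds):
    """Return the grid cells that should receive atomistic detail."""
    (x0, y0, z0), (x1, y1, z1) = scene_bounds
    out = []
    for i in range(int(x0 // CELL), int(x1 // CELL) + 1):
        for j in range(int(y0 // CELL), int(y1 // CELL) + 1):
            for k in range(int(z0 // CELL), int(z1 // CELL) + 1):
                center = ((i + 0.5) * CELL, (j + 0.5) * CELL, (k + 0.5) * CELL)
                if math.dist(camera, center) <= RADIUS:
                    out.append((i, j, k))  # only these cells get geometry
    return out

print(len(cells_to_populate((0, 0, 0), ((-100, -100, -100), (100, 100, 100)))))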
We demonstrate our technique on multiple models of SARS-CoV-2 and the red blood cell.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Ruwayda Alharbi"},{"affiliations":"","email":"","is_corresponding":false,"name":"Ond\u0159ej Strnad"},{"affiliations":"","email":"","is_corresponding":false,"name":"Tobias Klein"},{"affiliations":"","email":"","is_corresponding":false,"name":"Ivan Viola"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Ruwayda Alharbi"],"doi":"10.1109/TVCG.2024.3411786","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Interactive rendering, view-guided scene construction, biological data, hardware ray tracing"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243411786","time_end":"","time_stamp":"","time_start":"","title":"Nanomatrix: Scalable Construction of Crowded Biological Environments","uid":"v-tvcg-20243411786","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"v-tvcg-20243413195":{"abstract":"With the growing complexity and volume of data, visualizations have become more intricate, often requiring advanced techniques to convey insights. These complex charts are prevalent in everyday life, and individuals who lack knowledge in data visualization may find them challenging to understand. This paper investigates using Large Language Models (LLMs) to help users with low data literacy understand complex visualizations. While previous studies focus on text interactions with users, we noticed that visual cues are also critical for interpreting charts. We introduce an LLM application that supports both text and visual interaction for guiding chart interpretation. Our study with 26 participants revealed that the in-situ support effectively assisted users in interpreting charts and enhanced learning by addressing specific chart-related questions and encouraging further exploration. Visual communication allowed participants to convey their interests straightforwardly, eliminating the need for textual descriptions. However, the LLM assistance led users to engage less with the system, resulting in fewer insights from the visualizations. This suggests that users, particularly those with lower data literacy and motivation, may have over-relied on the LLM agent. 
We discuss opportunities for deploying LLMs to enhance visualization literacy while emphasizing the need for a balanced approach.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Kiroong Choe"},{"affiliations":"","email":"","is_corresponding":false,"name":"Chaerin Lee"},{"affiliations":"","email":"","is_corresponding":false,"name":"Soohyun Lee"},{"affiliations":"","email":"","is_corresponding":false,"name":"Jiwon Song"},{"affiliations":"","email":"","is_corresponding":false,"name":"Aeri Cho"},{"affiliations":"","email":"","is_corresponding":false,"name":"Nam Wook Kim"},{"affiliations":"","email":"","is_corresponding":false,"name":"Jinwook Seo"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Kiroong Choe"],"doi":"10.1109/TVCG.2024.3413195","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Visualization literacy, Large language model, Visual communication"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243413195","time_end":"","time_stamp":"","time_start":"","title":"Enhancing Data Literacy On-demand: LLMs as Guides for Novices in Chart Interpretation","uid":"v-tvcg-20243413195","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-beliv-1001":{"abstract":"I analyze the evolution of papers certified by the Graphics Replicability Stamp Initiative (GRSI) to be reproducible, with a specific focus on the subset of publications that address visualization-related topics. With this analysis I show that, while the number of papers is increasing overall and within the visualization field, we still have to improve quite a bit to escape the replication crisis. I base my analysis on the data published by the GRSI as well as publication data for the different venues in visualization and lists of journal papers that have been presented at visualization-focused conferences. I also analyze the differences between the involved journals as well as the percentage of reproducible papers in the different presentation venues. Furthermore, I look at the authors of the publications and, in particular, their affiliation countries to see where most reproducible papers come from. Finally, I discuss potential reasons for the low reproducibility numbers and suggest possible ways to overcome these obstacles. 
This paper is reproducible itself, with source code and data available from github.com/tobiasisenberg/Visualization-Reproducibility as well as a free paper copy and all supplemental materials at osf.io/mvnbj.","accessible_pdf":false,"authors":[{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Orsay, France","Inria, Saclay, France"],"email":"tobias.isenberg@gmail.com","is_corresponding":true,"name":"Tobias Isenberg"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Tobias Isenberg"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-beliv0","slot_id":"w-beliv-1001","time_end":"","time_stamp":"","time_start":"","title":"The State of Reproducibility Stamps for Visualization Research Papers","uid":"w-beliv-1001","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-beliv-1004":{"abstract":"In the rapidly evolving field of information visualization, rigorous evaluation is essential for validating new techniques, understanding user interactions, and demonstrating the effectiveness of visualizations. The evaluation of visualization systems is fundamental to ensuring their effectiveness, usability, and impact. Faithful evaluations provide valuable insights into how users interact with and perceive the system, enabling designers to make informed decisions about design choices and improvements. However, an emerging trend of multiple evaluations within a single study raises critical questions about the sustainability, feasibility, and methodological rigor of such an approach. So, \u201chow many evaluations are enough?\u201d is a situational question and cannot be formulaically determined. Our objective is to summarize current trends and patterns to understand general practices across different contribution and evaluation types. New researchers and students, influenced by this trend, may believe that multiple evaluations are necessary for a study. However, the number of evaluations in a study should depend on its contributions and merits, not on the trend of including multiple evaluations to strengthen a paper. In this position paper, we identify this trend through a non-exhaustive literature survey of TVCG papers from issue 1 in 2023 and 2024. 
We then discuss various evaluation strategy patterns in the information visualization field and how this paper will open avenues for further discussion.","accessible_pdf":false,"authors":[{"affiliations":["University of North Carolina at Chapel Hill, Chapel Hill, United States"],"email":"flin@unc.edu","is_corresponding":false,"name":"Feng Lin"},{"affiliations":["University of North Carolina-Chapel Hill, Chapel Hill, United States"],"email":"zeyuwang@cs.unc.edu","is_corresponding":false,"name":"Arran Zeyu Wang"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"dilshadur@sci.utah.edu","is_corresponding":false,"name":"Md Dilshadur Rahman"},{"affiliations":["University of North Carolina-Chapel Hill, Chapel Hill, United States"],"email":"danielle.szafir@cs.unc.edu","is_corresponding":false,"name":"Danielle Albers Szafir"},{"affiliations":["University of Oklahoma, Norman, United States"],"email":"quadri@ou.edu","is_corresponding":true,"name":"Ghulam Jilani Quadri"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Ghulam Jilani Quadri"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-beliv0","slot_id":"w-beliv-1004","time_end":"","time_stamp":"","time_start":"","title":"How Many Evaluations are Enough? A Position Paper on Evaluation Trend in Information Visualization","uid":"w-beliv-1004","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-beliv-1005":{"abstract":"Various standardized tests exist that assess individuals' visualization literacy. Their use can help to draw conclusions from studies. However, it is not taken into account that the test itself can create a pressure situation where participants might fear being exposed and assessed negatively. This is especially problematic when testing domain experts in design studies. We conducted interviews with experts from different domains performing the Mini-VLAT test for visualization literacy to identify potential problems. Our participants reported that the time limit per question, ambiguities in the questions and visualizations, and missing steps in the test procedure mainly had an impact on their performance and content. 
We discuss possible changes to the test design to address these issues and how such assessment methods could be integrated into existing evaluation procedures.","accessible_pdf":false,"authors":[{"affiliations":["University of Stuttgart, Stuttgart, Germany"],"email":"seyda.oeney@visus.uni-stuttgart.de","is_corresponding":true,"name":"Seyda \u00d6ney"},{"affiliations":["University of Stuttgart, Stuttgart, Germany"],"email":"moataz.abdelaal@visus.uni-stuttgart.de","is_corresponding":false,"name":"Moataz Abdelaal"},{"affiliations":["University of Stuttgart, Stuttgart, Germany"],"email":"kuno.kurzhals@visus.uni-stuttgart.de","is_corresponding":false,"name":"Kuno Kurzhals"},{"affiliations":["University of Stuttgart, Stuttgart, Germany"],"email":"paul.betz@sowi.uni-stuttgart.de","is_corresponding":false,"name":"Paul Betz"},{"affiliations":["University of Stuttgart, Stuttgart, Germany"],"email":"cordula.kropp@sowi.uni-stuttgart.de","is_corresponding":false,"name":"Cordula Kropp"},{"affiliations":["University of Stuttgart, Stuttgart, Germany"],"email":"weiskopf@visus.uni-stuttgart.de","is_corresponding":false,"name":"Daniel Weiskopf"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Seyda \u00d6ney"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-beliv0","slot_id":"w-beliv-1005","time_end":"","time_stamp":"","time_start":"","title":"Testing the Test: Observations When Assessing Visualization Literacy of Domain Experts","uid":"w-beliv-1005","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-beliv-1007":{"abstract":"In visualization, the process of transforming raw data into visually comprehensible representations is pivotal. While existing models like the Information Visualization Reference Model describe the data-to-visual mapping process, they often overlook a crucial intermediary step: design-specific transformations. This process, occurring after data transformation but before visual-data mapping, further derives data, such as groupings, layout, and statistics, that are essential to properly render the visualization. In this paper, we advocate for a deeper exploration of design-specific transformations, highlighting their importance in understanding visualization properties, particularly in relation to user tasks. We incorporate design-specific transformations into the Information Visualization Reference Model and propose a new formalism that encompasses the user task as a function over data. The resulting formalism offers three key benefits over existing visualization models: (1) describing tasks as compositions of functions, (2) enabling analysis of data transformations for visual-data mapping, and (3) empowering reasoning about visualization correctness and effectiveness. 
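A minimal sketch of the first benefit claimed above, a user task written as a composition of functions over data; the grouping and statistic below are illustrative design-specific transformations, not the paper's notation or examples.

# The task "which quarter had the highest sales?" expressed as a composition
# of data transformations: group, aggregate, then select the maximizer.
from functools import reduce
from collections import defaultdict

def compose(*fs):
    """compose(f, g, h)(x) == f(g(h(x)))."""
    return reduce(lambda f, g: lambda x: f(g(x)), fs)

rows = [{"quarter": "Q1", "sales": 3}, {"quarter": "Q1", "sales": 5},
        {"quarter": "Q2", "sales": 4}]

def group_by_quarter(rows):
    groups = defaultdict(list)
    for r in rows:
        groups[r["quarter"]].append(r["sales"])
    return groups

def totals(groups):
    return {k: sum(v) for k, v in groups.items()}

def argmax(d):
    return max(d, key=d.get)

task = compose(argmax, totals, group_by_quarter)  # the task as one function
print(task(rows))  # -> "Q1"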
We further discuss the potential implications of this model on visualization theory and visualization experiment design.","accessible_pdf":false,"authors":[{"affiliations":["Columbia University, New York City, United States"],"email":"ewu@cs.columbia.edu","is_corresponding":true,"name":"eugene Wu"},{"affiliations":["Tufts University, Medford, United States"],"email":"remco@cs.tufts.edu","is_corresponding":false,"name":"Remco Chang"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["eugene Wu"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-beliv0","slot_id":"w-beliv-1007","time_end":"","time_stamp":"","time_start":"","title":"Design-Specific Transforms In Visualization","uid":"w-beliv-1007","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-beliv-1008":{"abstract":"Stress is among the most commonly employed quality metrics and optimization criteria for dimension reduction projections of high-dimensional data. Complex, high-dimensional data is ubiquitous across many scientific disciplines, including machine learning, biology, and the social sciences. One of the primary methods of visualizing these datasets is with two-dimensional scatter plots that visually capture some properties of the data. Because visually determining the accuracy of these plots is challenging, researchers often use quality metrics to measure the projection\u2019s accuracy or faithfulness to the full data. One of the most commonly employed metrics, normalized stress, is sensitive to uniform scaling (stretching, shrinking) of the projection, despite this act not meaningfully changing anything about the projection. We investigate the effect of scaling on stress and other distance-based quality metrics analytically and empirically by showing just how much the values change and how this affects dimension reduction technique evaluations. 
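To illustrate the scaling sensitivity described above, and one way to remove it, here is a sketch that evaluates normalized stress at the least-squares-optimal uniform scaling of the projection; this is a hedged reading of the fix the abstract goes on to introduce, and the paper's exact formulation may differ.

# Normalized stress changes under uniform scaling of the projection, even
# though the projection itself is unchanged; evaluating it at the optimal
# scale factor removes that sensitivity.
import numpy as np
from scipy.spatial.distance import pdist

def normalized_stress(X_high, X_low, alpha=1.0):
    D = pdist(X_high)         # pairwise distances in the original space
    d = alpha * pdist(X_low)  # pairwise distances in the (scaled) projection
    return np.sqrt(np.sum((D - d) ** 2) / np.sum(D ** 2))

def scale_invariant_stress(X_high, X_low):
    D, d = pdist(X_high), pdist(X_low)
    alpha = np.sum(D * d) / np.sum(d ** 2)  # least-squares optimal scaling
    return normalized_stress(X_high, X_low, alpha)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))
P = X[:, :2]                             # a trivial "projection"
print(normalized_stress(X, P))           # changes if P is stretched...
print(normalized_stress(X, 3 * P))
print(scale_invariant_stress(X, 3 * P))  # ...but this value does not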
We introduce a simple technique to make normalized stress scale-invariant and show that it accurately captures expected behavior on a small benchmark.","accessible_pdf":false,"authors":[{"affiliations":["University of Arizona, Tucson, United States"],"email":"ksmelser@arizona.edu","is_corresponding":false,"name":"Kiran Smelser"},{"affiliations":["University of Arizona, Tucson, United States"],"email":"jacobmiller1@arizona.edu","is_corresponding":true,"name":"Jacob Miller"},{"affiliations":["University of Arizona, Tucson, United States"],"email":"stephen.kobourov@tum.de","is_corresponding":false,"name":"Stephen Kobourov"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Jacob Miller"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-beliv0","slot_id":"w-beliv-1008","time_end":"","time_stamp":"","time_start":"","title":"Normalized Stress is Not Normalized: How to Interpret Stress Correctly","uid":"w-beliv-1008","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-beliv-1009":{"abstract":"The cognitive processes involved in understanding and misunderstanding visualizations have not yet been fully clarified, even for well-studied designs, such as bar charts. In particular, little is known about whether viewers can improve their learning processes by getting better insight into their own cognition. This paper describes a simple method to measure the role of such metacognitive understanding when learning to read bar charts. For this purpose, we conducted an experiment in which we investigated bar chart learning repeatedly, and tested how learning over trials was affected by metacognitive understanding. We integrate the findings into a model of metacognitive processing of visualizations, and discuss implications for the design of visualizations.","accessible_pdf":false,"authors":[{"affiliations":["Heidelberg University, Heidelberg, Germany"],"email":"antonia.schlieder@t-online.de","is_corresponding":true,"name":"Antonia Schlieder"},{"affiliations":["Heidelberg University, Heidelberg, Germany"],"email":"jan.rummel@psychologie.uni-heidelberg.de","is_corresponding":false,"name":"Jan Rummel"},{"affiliations":["Ruprecht-Karls-Universit\u00e4t Heidelberg, Heidelberg, Germany"],"email":"palbers@mathi.uni-heidelberg.de","is_corresponding":false,"name":"Peter Albers"},{"affiliations":["Heidelberg University, Heidelberg, Germany"],"email":"sadlo@uni-heidelberg.de","is_corresponding":false,"name":"Filip Sadlo"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Antonia Schlieder"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-beliv0","slot_id":"w-beliv-1009","time_end":"","time_stamp":"","time_start":"","title":"The Role of Metacognition in Understanding Deceptive Bar Charts","uid":"w-beliv-1009","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-beliv-1015":{"abstract":"Empirical studies in visualisation often compare visual representations to identify the most effective visualisation for a particular visual judgement or decision making task. 
However, the effectiveness of a visualisation may be intrinsically related to, and difficult to distinguish from, factors such as visualisation literacy. Complicating matters further, visualisation literacy itself is not a singular intrinsic quality, but can be a result of several distinct challenges that a viewer encounters when performing a task with a visualisation. In this paper, we describe how such challenges apply to experiments that we use to evaluate visualisations, and discuss a set of considerations for designing studies in the future. Finally, we argue that aspects of the study design which are often neglected or overlooked (such as the onboarding of participants, tutorials, training, etc.) can play a substantial role in the results of a study and can potentially impact the conclusions that the researchers can draw from the study.","accessible_pdf":false,"authors":[{"affiliations":["Northwestern University, Evanston, United States"],"email":"abhraneel@u.northwestern.edu","is_corresponding":true,"name":"Abhraneel Sarma"},{"affiliations":["Northwestern University, Evanston, United States"],"email":"shenglong@u.northwestern.edu","is_corresponding":false,"name":"Sheng Long"},{"affiliations":["Northeastern University, Portland, United States"],"email":"m.correll@northeastern.edu","is_corresponding":false,"name":"Michael Correll"},{"affiliations":["Northwestern University, Chicago, United States"],"email":"matthew.kay@gmail.com","is_corresponding":false,"name":"Matthew Kay"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Abhraneel Sarma"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-beliv0","slot_id":"w-beliv-1015","time_end":"","time_stamp":"","time_start":"","title":"Tasks and Telephone: Understanding Barriers to Inference due to Issues in Experiment Design","uid":"w-beliv-1015","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-beliv-1016":{"abstract":"This position paper critically examines the graphical inference framework for evaluating visualizations using the lineup task. We present a re-analysis of lineup task data using signal detection theory, applying four Bayesian non-linear models to investigate whether color ramps with more color name variation increase false discoveries. Our study utilizes data from Reda and Szafir\u2019s previous work [20], corroborating their findings while providing additional insights into sensitivity and bias differences across colormaps and individuals. We suggest improvements to lineup study designs and explore the connections between graphical inference, signal detection theory, and statistical decision theory. Our work contributes a more perceptually grounded approach for assessing visualization effectiveness and offers a path forward for better aligning graphical inference methods with human cognition. The results have implications for the development and evaluation of visualizations, particularly for exploratory data analysis scenarios. 
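For readers unfamiliar with the signal detection framing used above, the textbook point estimates of sensitivity and bias from lineup outcomes look roughly like this; the paper itself fits four Bayesian non-linear models, which this sketch does not attempt, and the counts below are made up for illustration.

# Equal-variance signal detection: d' (sensitivity) and c (criterion) from a
# 2x2 table of lineup responses.
from statistics import NormalDist

Z = NormalDist().inv_cdf  # probit transform

def dprime_and_criterion(hits, misses, false_alarms, correct_rejections):
    """Sensitivity d' and criterion c, with a log-linear correction that
    avoids infinite z-scores when a rate would be exactly 0 or 1."""
    hr = (hits + 0.5) / (hits + misses + 1)
    fa = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return Z(hr) - Z(fa), -0.5 * (Z(hr) + Z(fa))

print(dprime_and_criterion(hits=38, misses=12, false_alarms=9, correct_rejections=41))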
Supplementary materials are available at https://osf.io/xd5cj/.","accessible_pdf":false,"authors":[{"affiliations":["Northwestern University, Evanston, United States"],"email":"shenglong@u.northwestern.edu","is_corresponding":true,"name":"Sheng Long"},{"affiliations":["Northwestern University, Chicago, United States"],"email":"matthew.kay@gmail.com","is_corresponding":false,"name":"Matthew Kay"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Sheng Long"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-beliv0","slot_id":"w-beliv-1016","time_end":"","time_stamp":"","time_start":"","title":"Old Wine in a New Bottle? Analysis of Visual Lineups with Signal Detection Theory","uid":"w-beliv-1016","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-beliv-1018":{"abstract":"Visualising personal experiences is often described as a means for self-reflection, shaping one\u2019s identity, and sharing it with others. In policymaking, personal narratives are regarded as an important source of intelligence to shape public discourse and policy. Therefore, policymakers are interested in the interplay between individual-level experiences and macro-political processes that play into shaping these experiences. In this context, visualisation is regarded as a medium for advocacy, creating a power balance between individuals and the power structures that influence their health and well-being. In this paper, we offer a politically-framed reflection on how visualisation creators define lived experience data, and what design choices they make for visualising them. We identify data characteristics and design choices that enable visualisation authors and consumers to engage in a process of narrative co-construction, while navigating structural forms of inequality. 
Our political framing is driven by ideas of master and alternative narratives from Diversity Science, in which authors and narrators engage in a process of negotiation with power structures to either maintain or challenge the status quo.","accessible_pdf":false,"authors":[{"affiliations":["City, University of London, London, United Kingdom"],"email":"mai.elshehaly@city.ac.uk","is_corresponding":true,"name":"Mai Elshehaly"},{"affiliations":["City, University of London, London, United Kingdom"],"email":"mirela.reljan-delaney@city.ac.uk","is_corresponding":false,"name":"Mirela Reljan-Delaney"},{"affiliations":["City, University of London, London, United Kingdom"],"email":"j.dykes@city.ac.uk","is_corresponding":false,"name":"Jason Dykes"},{"affiliations":["City, University of London, London, United Kingdom"],"email":"a.slingsby@city.ac.uk","is_corresponding":false,"name":"Aidan Slingsby"},{"affiliations":["City, University of London, London, United Kingdom"],"email":"j.d.wood@city.ac.uk","is_corresponding":false,"name":"Jo Wood"},{"affiliations":["University of Edinburgh, Edinburgh, United Kingdom"],"email":"sam.spiegel@ed.ac.uk","is_corresponding":false,"name":"Sam Spiegel"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Mai Elshehaly"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-beliv0","slot_id":"w-beliv-1018","time_end":"","time_stamp":"","time_start":"","title":"Visualising Lived Experience: Learning from a Master and Alternative Narrative Framing","uid":"w-beliv-1018","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-beliv-1020":{"abstract":"The generation and presentation of counterfactual explanations (CFEs) are a commonly used, model-agnostic approach to helping end-users reason about the validity of AI/ML model outputs. By demonstrating how sensitive the model's outputs are to minor variations, CFEs are thought to improve understanding of the model's behavior, identify potential biases, and increase the transparency of 'black box models'. Here, we examine how CFEs support a diverse audience, both with and without technical expertise, to understand the results of an LLM-informed sentiment analysis. We conducted a preliminary pilot study with ten individuals whose expertise ranged from NLP, ML, and ethics to specific application domains. All individuals were actively using or working with AI/ML technology as part of their daily jobs. Through semi-structured interviews grounded in a set of concrete examples, we examined how CFEs influence participants' perceptions of the model's correctness, fairness, and trustworthiness, and how visualization of CFEs specifically influences those perceptions. We also surface how participants wrestle with their internal definitions of \u2018explainability\u2019, relative to what CFEs present and to their own cultures and backgrounds, in addition to the much more widely studied phenomenon of comparing against their baseline expectations of the model's performance. Compared to prior research, our findings highlight the sociotechnical frictions that CFEs surface but do not necessarily remedy. 
We conclude with the design implications of developing transparent AI/ML visualization systems for more general tasks.","accessible_pdf":false,"authors":[{"affiliations":["Tableau Research, Seattle, United States"],"email":"amcrisan@uwaterloo.ca","is_corresponding":true,"name":"Anamaria Crisan"},{"affiliations":["Tableau Software, Seattle, United States"],"email":"nbutters@salesforce.com","is_corresponding":false,"name":"Nathan Butters"},{"affiliations":["Tableau Software, Seattle, United States"],"email":"zoezoezoe.cc@gmail.com","is_corresponding":false,"name":"Zoe Zoe"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Anamaria Crisan"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-beliv0","slot_id":"w-beliv-1020","time_end":"","time_stamp":"","time_start":"","title":"Exploring Subjective Notions of Explainability through Counterfactual Visualization of Sentiment Analysis","uid":"w-beliv-1020","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-beliv-1021":{"abstract":"The replication crisis has spawned a revolution in scientific methods, aimed at increasing the transparency, robustness, and reliability of scientific outcomes. In particular, the practice of preregistering study designs has shown important advantages. Preregistration can help limit questionable research practices, as well as increase the success rate of study replications. Many fields have now adopted preregistration as a default expectation for published studies. In 2022, we set up a panel \u201cMerits and Limits of User Study Preregistration\u201d with the overall goal of explaining the concept of preregistration to a wide VIS audience and discussing its suitability for visualization research. We report on the arguments and discussion of this panel in the hope that it can benefit the visualization community at large. 
All materials and a copy of this paper are available on our OSF repository at https://osf.io/wes57/.","accessible_pdf":false,"authors":[{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"lonni.besancon@gmail.com","is_corresponding":true,"name":"Lonni Besan\u00e7on"},{"affiliations":["University of Virginia, Charlottesville, United States"],"email":"nosek@virginia.edu","is_corresponding":false,"name":"Brian Nosek"},{"affiliations":["Tilburg University, Tilburg, Netherlands"],"email":"t.l.haven@tilburguniversity.edu","is_corresponding":false,"name":"Tamarinde Haven"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"miriah.meyer@liu.se","is_corresponding":false,"name":"Miriah Meyer"},{"affiliations":["Northeastern University, Boston, United States"],"email":"c.dunne@northeastern.edu","is_corresponding":false,"name":"Cody Dunne"},{"affiliations":["Luxembourg Institute of Science and Technology, Belvaux, Luxembourg"],"email":"mohammad.ghoniem@gmail.com","is_corresponding":false,"name":"Mohammad Ghoniem"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Lonni Besan\u00e7on"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-beliv0","slot_id":"w-beliv-1021","time_end":"","time_stamp":"","time_start":"","title":"Merits and Limits of Preregistration for Visualization Research","uid":"w-beliv-1021","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-beliv-1026":{"abstract":"Despite 30+ years of academic practice, visualization still lacks an explanation of how and why it functions in complex organizations performing knowledge work. This survey examines the intersection of organizational studies and visualization design, highlighting the concept of boundary objects, which visualization practitioners are adopting in both CSCW (computer-supported collaborative work) and HCI. This paper also collects the prior literature on boundary objects in visualization design studies, a methodology which maps closely to action research in organizations, and addresses the same problems of \u2018knowing in common\u2019. Process artifacts generated by visualization design studies function as boundary objects in their own right, facilitating knowledge transfer across disciplines within an organization. Currently, visualization faces the challenge of explaining how sense-making functions across domains, through visualization artifacts, and how these support decision-making. 
As a deeply interdisciplinary field, visualization should adopt the theory of boundary objects in order to embrace its plurality of domains and systems, whilst empowering its practitioners with a unified process-based theory.","accessible_pdf":false,"authors":[{"affiliations":["UC Santa Cruz, Santa Cruz, United States"],"email":"jtotto@ucsc.edu","is_corresponding":true,"name":"Jasmine Tan Otto"},{"affiliations":["California Institute of Technology, Pasadena, United States"],"email":"sd@scottdavidoff.com","is_corresponding":false,"name":"Scott Davidoff"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Jasmine Tan Otto"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-beliv0","slot_id":"w-beliv-1026","time_end":"","time_stamp":"","time_start":"","title":"Visualization Artifacts are Boundary Objects","uid":"w-beliv-1026","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-beliv-1027":{"abstract":"Foundation models for vision and language are the basis of AI applications across numerous sectors of society. The success of these models stems from their ability to mimic human capabilities, namely visual perception in vision models, and analytical reasoning in large language models. As visual perception and analysis are fundamental to data visualization, in this position paper we ask: how can we harness foundation models to advance progress in visualization design? Specifically, how can multimodal foundation models (MFMs) guide visualization design through visual perception? We approach these questions by investigating the effectiveness of MFMs for perceiving visualization, and formalizing the overall visualization design and optimization space. Specifically, we think that MFMs can best be viewed as judges, equipped with the ability to criticize visualizations, and provide us with actions on how to improve a visualization. We provide a deeper characterization for text-to-image generative models, and multi-modal large language models, organized by what these models provide as output, and how to utilize the output for guiding design decisions. 
We hope that our perspective can inspire researchers in visualization on how to approach MFMs for visualization design.","accessible_pdf":false,"authors":[{"affiliations":["Vanderbilt University, Nashville, United States"],"email":"matthew.berger@vanderbilt.edu","is_corresponding":true,"name":"Matthew Berger"},{"affiliations":["Lawrence Livermore National Laboratory, Livermore, United States"],"email":"shusenl@sci.utah.edu","is_corresponding":false,"name":"Shusen Liu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Matthew Berger"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-beliv0","slot_id":"w-beliv-1027","time_end":"","time_stamp":"","time_start":"","title":"[position paper] The Visualization JUDGE : Can Multimodal Foundation Models Guide Visualization Design Through Visual Perception?","uid":"w-beliv-1027","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-beliv-1033":{"abstract":"Submissions of original research that use Large Language Models (LLMs) or that study their behavior suddenly account for a sizable portion of works submitted and accepted to visualization (VIS) conferences and similar venues in human-computer interaction (HCI). In this brief position paper, I argue that reviewers are relatively unprepared to evaluate these submissions effectively. To support this conjecture, I reflect on my experience serving on four program committees for VIS and HCI conferences over the past year. I describe common reviewer critiques that I observed and highlight how these critiques influence the review process. I also raise some concerns about these critiques that could limit applied LLM research to all but the best-resourced labs. While I conclude with suggestions for evaluating research contributions that incorporate LLMs, the ultimate goal of this position paper is to stimulate a discussion on the review process and its challenges.","accessible_pdf":false,"authors":[{"affiliations":["Tableau Research, Seattle, United States"],"email":"amcrisan@uwaterloo.ca","is_corresponding":true,"name":"Anamaria Crisan"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Anamaria Crisan"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-beliv0","slot_id":"w-beliv-1033","time_end":"","time_stamp":"","time_start":"","title":"We Don't Know How to Assess LLM Contributions in VIS/HCI","uid":"w-beliv-1033","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-beliv-1034":{"abstract":"This paper revisits the role of quantitative and qualitative methods in visualization research in the context of advancements in artificial intelligence (AI). The focus is on how we can bridge between the different methods in an integrated process of analyzing user study data. To this end, a process model of - potentially iterated - semantic enrichment of data is proposed. This joint perspective of data and semantics facilitates the integration of quantitative and qualitative methods. 
The model is motivated by examples of prior work, especially in the area of eye tracking user studies and coding data-rich observations. Finally, there is a discussion of open issues and research opportunities in the interplay between AI and qualitative and quantitative methods for visualization research.","accessible_pdf":false,"authors":[{"affiliations":["University of Stuttgart, Stuttgart, Germany"],"email":"weiskopf@visus.uni-stuttgart.de","is_corresponding":true,"name":"Daniel Weiskopf"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Daniel Weiskopf"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-beliv0","slot_id":"w-beliv-1034","time_end":"","time_stamp":"","time_start":"","title":"Bridging Quantitative and Qualitative Methods for Visualization Research: A Data/Semantics Perspective in the Light of Advanced AI","uid":"w-beliv-1034","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-beliv-1035":{"abstract":"Complexity is often seen as an inherent negative in information design, with the job of the designer being to reduce or eliminate complexity, and with principles like Tufte\u2019s \u201cdata-ink ratio\u201d or \u201cchartjunk\u201d to operationalize minimalism and simplicity in visualizations. However, in this position paper, we call for a more expansive view of complexity as a design material, like color or texture or shape: an element of information design that can be used in many ways, many of which are beneficial to the goals of using data to understand the world around us. We describe complexity as a phenomenon that occurs not just in visual design but in every aspect of the sensemaking process, from data collection to interpretation. For each of these stages, we present examples of ways that these various forms of complexity can be used (or abused) in visualization design. 
We ultimately call on the visualization community to build a more nuanced view of complexity, to look for places to usefully integrate complexity in multiple stages of the design process, and, even when the goal is to reduce complexity, to look for the non-visual forms of complexity that may have otherwise been overlooked.","accessible_pdf":false,"authors":[{"affiliations":["University for Continuing Education Krems, Krems, Austria"],"email":"florian.windhager@donau-uni.ac.at","is_corresponding":true,"name":"Florian Windhager"},{"affiliations":["King's College London, London, United Kingdom"],"email":"alfie.abdulrahman@gmail.com","is_corresponding":false,"name":"Alfie Abdul-Rahman"},{"affiliations":["University of Applied Sciences Potsdam, Potsdam, Germany"],"email":"mark-jan.bludau@fh-potsdam.de","is_corresponding":false,"name":"Mark-Jan Bludau"},{"affiliations":["Warwick Institute for the Science of Cities, Coventry, United Kingdom"],"email":"nicole.hengesbach@posteo.de","is_corresponding":false,"name":"Nicole Hengesbach"},{"affiliations":["University of Amsterdam, Amsterdam, Netherlands"],"email":"h.lamqaddam@uva.nl","is_corresponding":false,"name":"Houda Lamqaddam"},{"affiliations":["OCAD University, Toronto, Canada"],"email":"meirelles.isabel@gmail.com","is_corresponding":false,"name":"Isabel Meirelles"},{"affiliations":["TU Eindhoven, Eindhoven, Netherlands"],"email":"b.speckmann@tue.nl","is_corresponding":false,"name":"Bettina Speckmann"},{"affiliations":["Northeastern University, Portland, United States"],"email":"m.correll@northeastern.edu","is_corresponding":false,"name":"Michael Correll"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Florian Windhager"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-beliv0","slot_id":"w-beliv-1035","time_end":"","time_stamp":"","time_start":"","title":"Complexity as Design Material","uid":"w-beliv-1035","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-beliv-1037":{"abstract":"Qualitative data analysis is widely adopted for user evaluation, not only in the Visualisation community but also in related communities, such as Human-Computer Interaction and Augmented and Virtual Reality. However, the data analysis process is often not clearly described and the results are often simply listed in the form of interesting quotes from or summaries of quotes that were uttered by study participants. This position paper proposes an early concept for the use of a researcher as an \u201cAdvocatus Diaboli\u201d, or devil\u2019s advocate, to try to disprove the results of the data analysis by looking for quotes that contradict the findings, or for leading questions and task designs. Whatever this devil\u2019s advocate finds can then be used to iterate on the findings and the analysis process to form more suitable theories. On the other hand, researchers are enabled to clarify why they did not include this in their theory. 
This process could increase transparency in the qualitative data analysis process and strengthen trust in these findings, while being mindful of the necessary resources.","accessible_pdf":false,"authors":[{"affiliations":["University of Applied Sciences Upper Austria, Hagenberg, Austria"],"email":"judith.friedl-knirsch@fh-hagenberg.at","is_corresponding":true,"name":"Judith Friedl-Knirsch"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Judith Friedl-Knirsch"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-beliv0","slot_id":"w-beliv-1037","time_end":"","time_stamp":"","time_start":"","title":"Position paper: Proposing the use of an \u201cAdvocatus Diaboli\u201d as a pragmatic approach to improve transparency in qualitative data analysis and reporting","uid":"w-beliv-1037","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-eduvis-1007":{"abstract":"Visualizations are a critical medium not only for telling stories, but for fostering exploration. But while there are countless examples of how to use visualizations for \u201cstorytelling with data,\u201d there are few guidelines on how to design visualizations for public exploration. This educator report draws on decades of work in science museums, a public context focused on designing interactive experiences for exploration, to provide evidence-based guidelines for designing exploratory visualizations. Recent studies on interactive visualizations in museums are contextualized within a larger body of museum research on designs that support exploratory learning in interactive exhibits. Synthesizing these studies highlights that to create successful exploratory visualizations, designers can apply long-standing guidelines from exhibit design but need to provide more aids for interpretation.","accessible_pdf":false,"authors":[{"affiliations":["Science Communication Lab, Berkeley, United States","University of California, San Francisco, San Francisco, United States"],"email":"jafrazier@gmail.com","is_corresponding":true,"name":"Jennifer Frazier"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Jennifer Frazier"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-eduvis0","slot_id":"w-eduvis-1007","time_end":"","time_stamp":"","time_start":"","title":"Beyond storytelling with data: Guidelines for designing exploratory visualizations","uid":"w-eduvis-1007","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-eduvis-1008":{"abstract":"With the increasing amount of data globally, analyzing and visualizing data are becoming essential skills across various professions. It is important to equip university students with these essential data skills. To learn, design, and develop data visualization, students need knowledge of programming and data science topics. Many university programs lack dedicated data science courses for undergraduate students, making it important to introduce these concepts through integrated courses. 
However, combining data science and data visualization into one course can be challenging due to time constraints and the heavy learning load. In this paper, we discuss the development of a course that teaches data science and data visualization together and share the results of the post-course evaluation survey. From the survey's results, we identified four challenges, including difficulty in learning multiple tools and diverse data science topics, varying proficiency levels with tools and libraries, and selecting and cleaning datasets. We also distilled five opportunities for developing a successful data science and visualization course. These opportunities include clarifying the course structure, emphasizing visualization literacy early in the course, updating the course content according to student needs, using large real-world datasets, learning from industry professionals, and promoting collaboration among students.","accessible_pdf":false,"authors":[{"affiliations":["Carleton University, Ottawa, Canada"],"email":"shrihariniramesh@cmail.carleton.ca","is_corresponding":true,"name":"Shri Harini Ramesh"},{"affiliations":["Carleton University, Ottawa, Canada","Bruyere Research Institute, Ottawa, Canada"],"email":"fateme.rajabiyazdi@carleton.ca","is_corresponding":false,"name":"Fateme Rajabiyazdi"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Shri Harini Ramesh"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-eduvis0","slot_id":"w-eduvis-1008","time_end":"","time_stamp":"","time_start":"","title":"Challenges and Opportunities of Teaching Data Visualization Together with Data Science","uid":"w-eduvis-1008","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-eduvis-1010":{"abstract":"This report examines the implementation of the Solution Framework in a social impact project facilitated by VizForSocialGood. It outlines the data visualization process, detailing each stage and offering practical insights. 
The framework's application demonstrates its effectiveness in enhancing project quality, efficiency, and collaboration, making it a valuable tool for educational and professional environments.","accessible_pdf":false,"authors":[{"affiliations":["Independent Information Designer, Medellin, Colombia","Independent Information Designer, Medellin, Colombia"],"email":"munozdataviz@gmail.com","is_corresponding":true,"name":"Victor Mu\u00f1oz"},{"affiliations":["Corporate Information Designer, Arlington Hts, United States","Corporate Information Designer, Arlington Hts, United States"],"email":"hellokevinford@gmail.com","is_corresponding":false,"name":"Kevin Ford"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Victor Mu\u00f1oz"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-eduvis0","slot_id":"w-eduvis-1010","time_end":"","time_stamp":"","time_start":"","title":"Implementing the Solution Framework in a Social Impact Project","uid":"w-eduvis-1010","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-eduvis-1013":{"abstract":"Academic advising can positively impact struggling students' success. We developed AdVizor, a data-driven learning analytics tool for academic risk prediction for advisors. Our system is equipped with a random forest model that produces grade prediction probabilities and uses a visualization dashboard that allows advisors to interpret model predictions. We evaluated our system in mock advising sessions with academic advisors and undergraduate students at our university. Results show that the system can easily integrate into the existing advising workflow, and visualizations of model outputs can be learned through short training sessions. AdVizor supports and complements the existing expertise of the advisor while helping to facilitate advisor-student discussion and analysis. Advisors found the system assisted them in guiding student course selection for the upcoming semester. It allowed them to guide students to prioritize the most critical and impactful courses. Both advisors and students perceived the system positively and were interested in using the system in the future. 
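A hedged sketch of the modeling step described above, a random forest whose per-student risk probabilities a dashboard could then visualize; the features, synthetic labels, and use of predict_proba are illustrative assumptions, not AdVizor's actual pipeline.

# Fit a random forest on synthetic student features and read off per-student
# probabilities of being at risk, the quantity a dashboard would visualize.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))  # e.g., GPA, attendance, assignment score
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=200) < 0).astype(int)  # 1 = at risk

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
risk = model.predict_proba(X[:5])[:, 1]  # per-student probability of risk
print(np.round(risk, 2))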
Our results encourage the development of intelligent advising systems in higher education, catered for advisors.","accessible_pdf":false,"authors":[{"affiliations":["Ontario Tech University, Oshawa, Canada"],"email":"riley.weagant@ontariotechu.net","is_corresponding":false,"name":"Riley Weagant"},{"affiliations":["Ontario Tech University, Oshawa, Canada"],"email":"zixin.zhao@ontariotechu.net","is_corresponding":true,"name":"Zixin Zhao"},{"affiliations":["Ontario Tech University, Oshawa, Canada"],"email":"abradley@uncharted.software","is_corresponding":false,"name":"Adam Badley"},{"affiliations":["Ontario Tech University, Oshawa, Canada"],"email":"christopher.collins@ontariotechu.ca","is_corresponding":false,"name":"Christopher Collins"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Zixin Zhao"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-eduvis0","slot_id":"w-eduvis-1013","time_end":"","time_stamp":"","time_start":"","title":"AdVizor: Using Visual Explanations to Guide Data-Driven Student Advising","uid":"w-eduvis-1013","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-eduvis-1015":{"abstract":"The integration of visualization in computing education has emerged as a promising strategy to enhance student understanding and engagement in complex computing concepts. Motivated by the need to explore effective teaching methods, this research systematically reviews the applications of visualization tools in computing education, aiming to identify gaps and opportunities for future research. We conducted a systematic literature review using papers from Semantic Scholar and Web of Science, and using a refined set of keywords to gather relevant studies. Our search yielded 288 results, which were systematically filtered to include 90 papers. Data extraction focused on publication details, research methods, key findings, future research suggestions, and research categories. Our review identified a diverse range of visualization tools and techniques used across different areas of computing education, including algorithms, programming, online learning, and problem-solving. The findings highlight the effectiveness of these tools in improving student engagement, understanding, and learning outcomes. However, there is a need for rigorous evaluations and the development of new models tailored to specific learning difficulties. 
By identifying effective visualization techniques and areas for further investigation, this review encourages the continued development and integration of visual tools in computing education to support the advancement of teaching methodologies.","accessible_pdf":false,"authors":[{"affiliations":["University of Toronto, Toronto, Canada"],"email":"naaz.sibia@utoronto.ca","is_corresponding":true,"name":"Naaz Sibia"},{"affiliations":["University of Toronto Mississauga, Mississauga, Canada"],"email":"michael.liut@utoronto.ca","is_corresponding":false,"name":"Michael Liut"},{"affiliations":["University of Toronto, Toronto, Canada"],"email":"cnobre@cs.toronto.edu","is_corresponding":false,"name":"Carolina Nobre"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Naaz Sibia"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-eduvis0","slot_id":"w-eduvis-1015","time_end":"","time_stamp":"","time_start":"","title":"Exploring the Role of Visualization in Enhancing Computing Education: A Systematic Literature Review","uid":"w-eduvis-1015","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-eduvis-1017":{"abstract":"The digitalisation of organisations has transformed the way organisations view data. All employees are expected to be data literate and managers are expected to make data-driven decisions [1]. The ability to analyse and visualize the data is a crucial skill set expected from every decision-maker. To help managers develop the skill of data visualization, business schools across the world offer courses in data visualization. From an educator\u2019s perspective, one key decision that he/she must take while designing a visualization course for management students is the software tool to use in the course. Existing literature on data visualization in the scientific community is primarily focused on tools used by researchers or computer scientists ([3], [4]). In [5] the authors evaluate the landscape of commercially available visual analytics systems. In business-related publications like Harvard Business Review, the focus is more on selecting the right chart or on designing effective visualization ([6], [7]). There is a lack of literature to guide educators in teaching visualization to management students.
This article attempts to guide educators teaching visualization to management students on how to select the appropriate software tool for their course.","accessible_pdf":false,"authors":[{"affiliations":["Indian institute of management indore, Indore, India"],"email":"sanjogr@iimidr.ac.in","is_corresponding":true,"name":"Sanjog Ray"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Sanjog Ray"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-eduvis0","slot_id":"w-eduvis-1017","time_end":"","time_stamp":"","time_start":"","title":"Visualization Software: How to Select the Right Software for Teaching Visualization.","uid":"w-eduvis-1017","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-eduvis-1018":{"abstract":"In this article, we discuss an experience with design and situated learning in the Creative Data Visualization course, part of the Visual Communication Design undergraduate program at the Federal University of Rio de Janeiro, a free, public Brazilian university that, thanks to affirmative action policies, has become more inclusive over the years. We begin with a brief introduction to the terms Situated Knowledge, coined by Donna Haraway, Situated Design, based on the former concept, and Situated Learning. We then examine the similarities and differences between these notions and the term Situated Visualization to present a model for the concept of Situated Learning in Information Visualization. Following this foundation, we describe the applied methodology, emphasizing the importance of integrating real-world contexts into students\u2019 projects. As a case study, we present three student projects produced as final assignments for the course. Through this article, we aim to underscore the articulation of situated design concepts in information visualization activities and contribute to teaching and learning practices in this field, particularly within the Global South.","accessible_pdf":false,"authors":[{"affiliations":["Universidade Federal do Rio de Janeiro, Rio de Janeiro, Brazil"],"email":"doriskos@eba.ufrj.br","is_corresponding":true,"name":"Doris Kosminsky"},{"affiliations":["Federal University of Rio de Janeiro, Rio de Janeiro, Brazil"],"email":"renata.perim@ufrj.br","is_corresponding":false,"name":"Renata Perim Lopes"},{"affiliations":["UFRJ, RJ, Brazil","IBGE, RJ, Brazil"],"email":"regina.reznik@ufrj.br","is_corresponding":false,"name":"Regina Reznik"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Doris Kosminsky"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-eduvis0","slot_id":"w-eduvis-1018","time_end":"","time_stamp":"","time_start":"","title":"Teaching Information Visualization through Situated Design: Case Studies from the Classroom","uid":"w-eduvis-1018","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-eduvis-1019":{"abstract":"The integration of data visualization in journalism has catalyzed the growth of data storytelling in recent years. 
Today, it is increasingly common for journalism schools to incorporate data visualization into their curricula. However, the approach to teaching data visualization in journalism schools can diverge significantly from that in computer science or design schools, influenced by the varied backgrounds of students and the distinct value systems inherent to these disciplines. This paper reviews my experience and reflections on teaching data visualization in a journalism school. First, I discuss the prominent characteristics of journalism education that pose challenges for course design and teaching. Then, I share firsthand teaching experiences related to each characteristic and recommend approaches for effective teaching.","accessible_pdf":false,"authors":[{"affiliations":["Fudan University, Shanghai, China"],"email":"xingyulan96@gmail.com","is_corresponding":true,"name":"Xingyu Lan"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Xingyu Lan"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-eduvis0","slot_id":"w-eduvis-1019","time_end":"","time_stamp":"","time_start":"","title":"Reflections on Teaching Data Visualization at the Journalism School","uid":"w-eduvis-1019","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-eduvis-1020":{"abstract":"In this paper, we discuss our experiences advancing a professional-oriented graduate program in Cartography & GIScience at the University of Wisconsin-Madison to account for fundamental shifts in conceptual framings, rapidly evolving mapping technologies, and diverse student needs. We focus our attention on considerations for the cartography curriculum given its relevance to (geo)visualization education and map literacy. We reflect on challenges associated with, and lessons learned from, developing a comprehensive and cohesive cartography curriculum across in-person and online learning modalities for a wide range of professional student audiences.","accessible_pdf":false,"authors":[{"affiliations":["University of Wisconsin-Madison, Madison, United States"],"email":"jknelson3@wisc.edu","is_corresponding":true,"name":"Jonathan Nelson"},{"affiliations":["University of Wisconsin-Madison, Madison, United States"],"email":"limpisathian@wisc.edu","is_corresponding":false,"name":"P. William Limpisathian"},{"affiliations":["University of Wisconsin-Madison, Madison, United States"],"email":"reroth@wisc.edu","is_corresponding":false,"name":"Robert Roth"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Jonathan Nelson"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-eduvis0","slot_id":"w-eduvis-1020","time_end":"","time_stamp":"","time_start":"","title":"Developing a Robust Cartography Curriculum to Train the Professional Cartographer","uid":"w-eduvis-1020","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-eduvis-1026":{"abstract":"For over half a century, science centers have been key in communicating science, aiming to increase interest and curiosity in STEM, and promote lifelong learning. 
Science centers integrate interactive technologies like dome displays, touch tables, VR and AR for immersive learning. Visitors can explore complex phenomena, such as conducting a virtual autopsy. Also, the shift towards digitally interactive exhibits has expanded science centers beyond physical locations to virtual spaces, extending their reach into classrooms. Our investigation revealed several key factors for impactful school visits involving interactive data visualization. Immersive experiences, such as full-dome movies, provide unique perspectives on vast and microscopic phenomena. Hands-on discovery allows pupils to manipulate and investigate data, leading to deeper engagement. Collaborative interaction fosters active learning through group participation. Additionally, clear curriculum connections ensure that visits are pedagogically meaningful. We propose a three-stage model for school visits. The \"Experience\" stage involves immersive visual experiences to spark interest. The \"Engagement\" stage builds on this by providing hands-on interaction with data visualization exhibits. The \"Applicate\" stage offers opportunities to apply and create using data visualization. A future goal of the model is to broaden STEM reach, enabling pupils to benefit from data visualization experiences even if they cannot visit centers.","accessible_pdf":false,"authors":[{"affiliations":["Link\u00f6ping university, Norrk\u00f6ping, Sweden"],"email":"andreas.c.goransson@liu.se","is_corresponding":true,"name":"Andreas G\u00f6ransson"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"konrad.schonborn@liu.se","is_corresponding":false,"name":"Konrad J Sch\u00f6nborn"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Andreas G\u00f6ransson"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-eduvis0","slot_id":"w-eduvis-1026","time_end":"","time_stamp":"","time_start":"","title":"What makes school visits to digital science centers successful?","uid":"w-eduvis-1026","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-eduvis-1027":{"abstract":"Parallel coordinate plots (PCPs) are gaining popularity in data exploration, statistical analysis, and predictive analysis, as well as for data-driven storytelling. In this paper, we present the results of a post-hoc analysis of a dataset from a PCP literacy intervention to identify barriers to PCP literacy. We analyzed question responses and inductively identified barriers to PCP literacy. We performed group coding on each individual response and identified new barriers to PCP literacy. Based on our analysis, we present an extended and enhanced list of barriers to PCP literacy. Our findings have implications for educational interventions targeting PCP literacy and can provide an approach for students to learn about PCPs through active learning.","accessible_pdf":false,"authors":[{"affiliations":["University of San Francisco, San Francisco, United States"],"email":"csrinivas2@dons.usfca.edu","is_corresponding":false,"name":"Chandana Srinivas"},{"affiliations":["Cukurova University, Adana, Turkey"],"email":"elifemelfirat@gmail.com","is_corresponding":false,"name":"Elif E. 
Firat"},{"affiliations":["University of Nottingham, Nottingham, United Kingdom"],"email":"robert.laramee@nottingham.ac.uk","is_corresponding":false,"name":"Robert S. Laramee"},{"affiliations":["University of San Francisco, San Francisco, United States"],"email":"apjoshi@usfca.edu","is_corresponding":true,"name":"Alark Joshi"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Alark Joshi"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-eduvis0","slot_id":"w-eduvis-1027","time_end":"","time_stamp":"","time_start":"","title":"An Inductive Approach for Identification of Barriers to PCP Literacy","uid":"w-eduvis-1027","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-eduvis-1028":{"abstract":"With the decreasing cost of consumer display technologies making it easier for universities to have larger displays in classrooms, and the ubiquitous use of online tools such as collaborative whiteboards for remote learning during the COVID-19 pandemic, combining the two can be useful in higher education. This is especially true in visually intensive classes, such as data visualization courses, that can benefit from additional \"space to teach,\" coined after the \"space to think\" sense-making idiom. In this paper, we reflect on our approach to using SAGE3, a collaborative whiteboard with advanced features, in higher education to teach visually intensive classes, provide examples of activities from our own visually-intensive courses, and present student feedback. We gather our observations into usage patterns for using content-rich canvases in education.","accessible_pdf":false,"authors":[{"affiliations":["Virginia Tech, Blacksburg, United States"],"email":"jessemh@vt.edu","is_corresponding":true,"name":"Jesse Harden"},{"affiliations":["University of Hawaii at Manoa, Honolulu, United States"],"email":"nuritk@hawaii.edu","is_corresponding":false,"name":"Nurit Kirshenbaum"},{"affiliations":["University of Hawaii at Manoa, Honolulu, United States"],"email":"tabalbar@hawaii.edu","is_corresponding":false,"name":"Roderick S Tabalba Jr."},{"affiliations":["University of Hawaii at Manoa, Honolulu, United States"],"email":"rtheriot@hawaii.edu","is_corresponding":false,"name":"Ryan Theriot"},{"affiliations":["The University of Hawai'i at M\u0101noa, Honolulu, United States"],"email":"mlr2010@hawaii.edu","is_corresponding":false,"name":"Michael L. 
Rogers"},{"affiliations":["University of Hawaii at Manoa, Honolulu, United States"],"email":"mahdi@hawaii.edu","is_corresponding":false,"name":"Mahdi Belcaid"},{"affiliations":["Virginia Tech, Blacksburg, United States"],"email":"north@vt.edu","is_corresponding":false,"name":"Chris North"},{"affiliations":["University of Illinois at Chicago, Chicago, United States"],"email":"renambot@uic.edu","is_corresponding":false,"name":"Luc Renambot"},{"affiliations":["University of Illinois at Chicago, Chicago, United States"],"email":"llong4@uic.edu","is_corresponding":false,"name":"Lance Long"},{"affiliations":["University of Illinois Chicago, Chicago, United States"],"email":"ajohnson@uic.edu","is_corresponding":false,"name":"Andrew E Johnson"},{"affiliations":["University of Hawaii at Manoa, Honolulu, United States"],"email":"leighj@hawaii.edu","is_corresponding":false,"name":"Jason Leigh"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Jesse Harden"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-eduvis0","slot_id":"w-eduvis-1028","time_end":"","time_stamp":"","time_start":"","title":"Space to Teach: Content-Rich Canvases for Visually-Intensive Education","uid":"w-eduvis-1028","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-eduvis-1029":{"abstract":"Data-art blends visualisation, data science, and artistic expression. It allows people to transform information and data into exciting and interesting visual narratives. Hosting a public data-art hands-on workshop enables participants to engage with data and learn fundamental visualisation techniques. However, being a public event, it presents a range of challenges. We outline our approach to organising and conducting a public workshop, that caters to a wide age range, from children to adults. We divide the tutorial into three sections, focusing on data, sketching skills and visualisation. We place emphasis on public engagement, and ensure that participants have fun while learning new skills.","accessible_pdf":false,"authors":[{"affiliations":["Bangor University, Bangor, United Kingdom"],"email":"j.c.roberts@bangor.ac.uk","is_corresponding":true,"name":"Jonathan C Roberts"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Jonathan C Roberts"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-eduvis0","slot_id":"w-eduvis-1029","time_end":"","time_stamp":"","time_start":"","title":"Engaging Data-Art: Conducting a Public Hands-On Workshop","uid":"w-eduvis-1029","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-eduvis-1030":{"abstract":"We propose to leverage the recent development in Large Language Models, in combination to data visualization software and devices in science centers and schools in order to foster more personalized learning experiences. The main goal with our endeavour is to provide to pupils and visitors the same experience they would get with a professional facilitator when interacting with data visualizations of complex scientific phenomena. 
We describe the results from our early prototypes and the intended implementation and testing of our idea.","accessible_pdf":false,"authors":[{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"lonni.besancon@gmail.com","is_corresponding":true,"name":"Lonni Besan\u00e7on"},{"affiliations":["LiU Link\u00f6ping Universitet, Norrk\u00f6ping, Sweden"],"email":"mathis.brossier@liu.se","is_corresponding":false,"name":"Mathis Brossier"},{"affiliations":["King Abdullah University of Science and Technology, Thuwal, Saudi Arabia"],"email":"omar.mena@kaust.edu.sa","is_corresponding":false,"name":"Omar Mena"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"erik.sunden@liu.se","is_corresponding":false,"name":"Erik Sund\u00e9n"},{"affiliations":["Link\u00f6ping university, Norrk\u00f6ping, Sweden"],"email":"andreas.c.goransson@liu.se","is_corresponding":false,"name":"Andreas G\u00f6ransson"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"anders.ynnerman@liu.se","is_corresponding":false,"name":"Anders Ynnerman"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"konrad.schonborn@liu.se","is_corresponding":false,"name":"Konrad J Sch\u00f6nborn"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Lonni Besan\u00e7on"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-eduvis0","slot_id":"w-eduvis-1030","time_end":"","time_stamp":"","time_start":"","title":"TellUs \u2013 Leveraging the power of LLMs with visualization to benefit science centers.","uid":"w-eduvis-1030","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-eduvis-1031":{"abstract":"In this reflective essay, we explore how educational science can be relevant for visualization research, addressing beneficial intersections between the two communities. While visualization has become integral to various areas, including education, our own ongoing collaboration has induced reflections and discussions we believe could benefit visualization research. In particular, we identify five key perspectives: surpassing traditional evaluation metrics by incorporating established educational measures; defining constructs based on existing learning and educational research frameworks; applying established cognitive theories to understand interpretation and interaction with visualizations; establishing uniform terminology across disciplines; and fostering interdisciplinary convergence. We argue that by integrating educational research constructs, methodologies, and theories, visualization research can further pursue ecological validity and thereby improve the design and evaluation of visual tools. Our essay emphasizes the potential of intensified and systematic collaborations between educational scientists and visualization researchers to advance both fields, and in doing so craft visualization systems that support comprehension, retention, transfer, and critical thinking. 
We argue that this reflective essay serves as a first point of departure for initiating dialogue that, we hope, could help further connect educational science and visualization by proposing future empirical studies that take advantage of interdisciplinary approaches that benefit both communities.","accessible_pdf":false,"authors":[{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"konrad.schonborn@liu.se","is_corresponding":false,"name":"Konrad J Sch\u00f6nborn"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"lonni.besancon@gmail.com","is_corresponding":true,"name":"Lonni Besan\u00e7on"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Lonni Besan\u00e7on"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-eduvis0","slot_id":"w-eduvis-1031","time_end":"","time_stamp":"","time_start":"","title":"What Can Educational Science Offer Visualization? A Reflective Essay","uid":"w-eduvis-1031","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-energyvis-1762":{"abstract":"Weather can have a significant impact on the power grid. Heat and cold waves lead to increased energy use as customers cool or heat their space, while simultaneously hampering energy production as the environment deviates from ideal operating conditions. Extreme heat has previously melted power cables, while extreme cold can cause vital parts of the energy infrastructure to freeze. Utilities have reserves to compensate for the additional energy use, but in extreme cases which fall outside the forecast energy demand, the impact on the power grid can be severe. In this paper, we present an interactive tool to explore the relationship between weather and power outages. 
We demonstrate its use with the example of Winter Storm Uri\u2019s impact on Texas in February 2021.","accessible_pdf":false,"authors":[{"affiliations":["Institute of Computer Science, Leipzig University, Leipzig, Germany"],"email":"nsonga@informatik.uni-leipzig.de","is_corresponding":true,"name":"Baldwin Nsonga"},{"affiliations":["National Renewable Energy Laboratory, Golden, United States"],"email":"andy.berres@gmail.com","is_corresponding":false,"name":"Andy S Berres"},{"affiliations":["National Renewable Energy Laboratory, Golden, United States"],"email":"bobby.jeffers@nrel.gov","is_corresponding":false,"name":"Robert Jeffers"},{"affiliations":["National Renewable Energy Laboratory, Golden, United States"],"email":"caitlyn.clark6@icloud.com","is_corresponding":false,"name":"Caitlyn Clark"},{"affiliations":["University of Kaiserslautern, Kaiserslautern, Germany"],"email":"hagen@cs.uni-kl.de","is_corresponding":false,"name":"Hans Hagen"},{"affiliations":["Leipzig University, Leipzig, Germany"],"email":"scheuermann@informatik.uni-leipzig.de","is_corresponding":false,"name":"Gerik Scheuermann"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Baldwin Nsonga"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-energyvis0","slot_id":"w-energyvis-1762","time_end":"","time_stamp":"","time_start":"","title":"Extreme Weather and the Power Grid: A Case Study of Winter Storm Uri","uid":"w-energyvis-1762","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-energyvis-2646":{"abstract":"With the growing penetration of inverter-based distributed energy resources and increased loads through electrification, power systems analyses are becoming more important and more complex. Moreover, these analyses increasingly involve the combination of interconnected energy domains with data that are spatially and temporally increasing in scale by orders of magnitude, surpassing the capabilities of many existing analysis and decision-support systems. We present the architectural design, development, and application of a high-resolution web-based visualization environment capable of cross-domain analysis of tens of millions of energy assets, focusing on scalability and performance. Our system supports the exploration, navigation, and analysis of large data from diverse domains such as electrical transmission and distribution systems, mobility and electric vehicle charging networks, communications networks, cyber assets, and other supporting infrastructure. 
We evaluate this system across multiple use cases, describing the capabilities and limitations of a web-based approach for high-resolution energy system visualizations.","accessible_pdf":false,"authors":[{"affiliations":["National Renewable Energy Lab, Golden, United States"],"email":"graham.johnson@nrel.gov","is_corresponding":false,"name":"Graham Johnson"},{"affiliations":["National Renewable Energy Lab, Golden, United States"],"email":"sam.molnar@nrel.gov","is_corresponding":false,"name":"Sam Molnar"},{"affiliations":["National Renewable Energy Laboratory, Golden, United States"],"email":"nicholas.brunhart-lupo@nrel.gov","is_corresponding":false,"name":"Nicholas Brunhart-Lupo"},{"affiliations":["National Renewable Energy Lab, Golden, United States"],"email":"kenny.gruchalla@nrel.gov","is_corresponding":true,"name":"Kenny Gruchalla"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Kenny Gruchalla"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-energyvis0","slot_id":"w-energyvis-2646","time_end":"","time_stamp":"","time_start":"","title":"Architecture for Web-Based Visualization of Large-Scale Energy Domains","uid":"w-energyvis-2646","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-energyvis-2743":{"abstract":"In the pursuit of achieving net-zero greenhouse gas emissions by 2050, policymakers and researchers require sophisticated tools to explore and compare various climate transition scenarios. This paper introduces the Pathways Explorer, an innovative visualization tool designed to facilitate these comparisons by providing an interactive platform that allows users to select, view, and dissect multiple pathways towards sustainability. Developed in collaboration with the \u201cInstitut de l\u2019\u00e9nergie Trottier\u201d (IET), this tool leverages a technoeconomic optimization model to project the energy transformation needed under different constraints and assumptions. We detail the design process that guided the development of the Pathways Explorer, focusing on user-centered design challenges and requirements. A case study is presented to demonstrate how the tool has been utilized by stakeholders to make informed decisions, highlighting its impact and effectiveness. 
The Pathways Explorer not only enhances understanding of complex climate data but also supports strategic planning by providing clear, comparative visualizations of potential future scenarios.","accessible_pdf":false,"authors":[{"affiliations":["Kashika Studio, Montreal, Canada"],"email":"francois.levesque@polymtl.ca","is_corresponding":false,"name":"Fran\u00e7ois L\u00e9vesque"},{"affiliations":["Polytechnique Montreal, Montreal, Canada"],"email":"louis.beaumier@polymtl.ca","is_corresponding":false,"name":"Louis Beaumier"},{"affiliations":["Polytechnique Montreal, Montreal, Canada"],"email":"thomas.hurtut@polymtl.ca","is_corresponding":true,"name":"Thomas Hurtut"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Thomas Hurtut"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-energyvis0","slot_id":"w-energyvis-2743","time_end":"","time_stamp":"","time_start":"","title":"Pathways Explorer: Interactive Visualization of Climate Transition Scenarios","uid":"w-energyvis-2743","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-energyvis-2845":{"abstract":"Methane (CH4) leakage monitoring is crucial for environmental protection and regulatory compliance, particularly in the oil and gas industries. Reducing CH4 emissions helps advance green energy by converting it into a valuable energy source through innovative capture technologies. A real-time continuous monitoring system (CMS) is necessary to detect fugitive and intermittent emissions and provide actionable insights. Integrating spatiotemporal data from satellites, airborne sensors, and ground sensors with inventory data and the Weather Research and Forecasting (WRF) model creates a comprehensive dataset, making CMS feasible but posing significant challenges. These challenges include data alignment and fusion, managing heterogeneity, handling missing values, ensuring resolution integrity, and maintaining geometric and radiometric accuracy. This study outlines the procedure for methane leakage detection, addressing challenges at each step and offering solutions through machine learning and data analysis. 
It further details how visual analytics can be implemented to improve the effectiveness of the various aspects of emission monitoring.","accessible_pdf":false,"authors":[{"affiliations":["University of Oklahoma, Norman, United States"],"email":"parisa.masnadi@ou.edu","is_corresponding":true,"name":"Parisa Masnadi Khiabani"},{"affiliations":["University of Oklahoma, Norman, United States"],"email":"danala@ou.edu","is_corresponding":false,"name":"Gopichandh Danala"},{"affiliations":["University of Oklahoma, Norman, United States"],"email":"wolfgang.jentner@uni-konstanz.de","is_corresponding":false,"name":"Wolfgang Jentner"},{"affiliations":["University of Oklahoma, Oklahoma, United States"],"email":"ebert@ou.edu","is_corresponding":false,"name":"David Ebert"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Parisa Masnadi Khiabani"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-energyvis0","slot_id":"w-energyvis-2845","time_end":"","time_stamp":"","time_start":"","title":"Challenges in Data Integration, Monitoring, and Exploration of Methane Emissions: The Role of Data Analysis and Visualization","uid":"w-energyvis-2845","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-energyvis-3496":{"abstract":"Transmission System Operators (TSOs) often need to integrate multiple sources of information to make decisions in real time. In cases where a single power line goes offline, due to a natural event or scheduled outage, there typically will be a contingency plan that the TSO may utilize to mitigate the situation. In cases where two or more power lines go offline, this contingency plan is no longer valid, and they must re-prepare and reason about the network in real time. A key network property that must be balanced is loadability--the range of permissible voltage levels for a specific bus (or node), understood as a function of power and its active (P) and reactive (Q) components. Loadability provides information about how much more demand a specific node can handle before the system becomes unstable. To increase loadability, the TSO can potentially take control actions that raise or lower P or Q, which changes the voltage levels so that they remain within permissible limits. While many methods exist to calculate loadability and represent it to end users, there has been little focus on tailoring loadability visualizations to the unique needs of TSOs. In this paper we involve operations domain experts in a human-centered design process to prototype two new loadability visualizations for TSOs. 
We contribute a design paper that yields: (1) a working model of the operator's decision making process, (2) example artifacts of the two data visualization techniques, and (3) a critical qualitative expert review of our designs.","accessible_pdf":false,"authors":[{"affiliations":["Hitachi Energy Research, Montreal, Canada"],"email":"dmarino@cim.mcgill.ca","is_corresponding":true,"name":"David Marino"},{"affiliations":["Carleton University, Ottawa, Canada"],"email":"maxwellkeleher@cmail.carleton.ca","is_corresponding":false,"name":"Maxwell Keleher"},{"affiliations":["Hitachi Energy Research, Krakow, Poland"],"email":"krzysztof.chmielowiec@hitachienergy.com","is_corresponding":false,"name":"Krzysztof Chmielowiec"},{"affiliations":["Hitachi Energy Research, Montreal, Canada"],"email":"antony.hilliard@hitachienergy.com","is_corresponding":false,"name":"Antony Hilliard"},{"affiliations":["Hitachi Energy Research, Krakow, Poland"],"email":"pawel.dawidowski@hitachienergy.com","is_corresponding":false,"name":"Pawel Dawidowski"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["David Marino"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-energyvis0","slot_id":"w-energyvis-3496","time_end":"","time_stamp":"","time_start":"","title":"Operator-Centered Design of a Nodal Loadability Network Visualization","uid":"w-energyvis-3496","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-energyvis-4332":{"abstract":"The rapid growth of the solar energy industry requires advanced educational tools to train the next generation of engineers and technicians. We present a novel system for situated visualization of photovoltaic (PV) module performance, leveraging a combination of PV simulation, sun-sky position, and head-mounted augmented reality (AR). Our system is guided by four principles of development: simplicity, adaptability, collaboration, and maintainability, realized in six components. 
Users interactively manipulate a physical module's orientation and shading referents with immediate feedback on the module's performance.","accessible_pdf":false,"authors":[{"affiliations":["National Renewable Energy Laboratory, Golden, United States"],"email":"nicholas.brunhart-lupo@nrel.gov","is_corresponding":true,"name":"Nicholas Brunhart-Lupo"},{"affiliations":["National Renewable Energy Lab, Golden, United States"],"email":"kenny.gruchalla@nrel.gov","is_corresponding":false,"name":"Kenny Gruchalla"},{"affiliations":["Fort Lewis College, Durango, United States"],"email":"williams_l@fortlewis.edu","is_corresponding":false,"name":"Laurie Williams"},{"affiliations":["Fort Lewis College, Durango, United States"],"email":"selias@fortlewis.edu","is_corresponding":false,"name":"Steve Ellis"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Nicholas Brunhart-Lupo"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-energyvis0","slot_id":"w-energyvis-4332","time_end":"","time_stamp":"","time_start":"","title":"Situated Visualization of Photovoltaic Module Performance for Workforce Development","uid":"w-energyvis-4332","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-energyvis-6102":{"abstract":"This paper introduces CPIE (Coal Pollution Impact Explorer), a spatiotemporal visual analytic tool developed for interactive visualization of coal pollution impacts. CPIE visualizes electricity-generating units (EGUs) and their contributions to statewide Medicare deaths related to coal PM2.5 emissions. The tool is designed to make scientific findings on the impacts of coal pollution more accessible to the general public and to raise awareness of the associated health risks. 
We present three use cases for CPIE: 1) the overall spatial distribution of all 480 facilities in the United States, their statewide impact on excess deaths, and the overall decreasing trend in deaths associated with coal pollution from 1999 to 2020; 2) the influence of pollution transport, where most deaths associated with a facility occur within the same state and neighboring states, but some occur far away; and 3) the effectiveness of intervention regulations, such as installing emissions control devices and shutting down coal facilities, in significantly reducing the number of deaths associated with coal pollution.","accessible_pdf":false,"authors":[{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"sjin86@gatech.edu","is_corresponding":true,"name":"Sichen Jin"},{"affiliations":["George Mason University, Fairfax, United States"],"email":"lhennem@gmu.edu","is_corresponding":false,"name":"Lucas Henneman"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"jessica.roberts@cc.gatech.edu","is_corresponding":false,"name":"Jessica Roberts"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Sichen Jin"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-energyvis0","slot_id":"w-energyvis-6102","time_end":"","time_stamp":"","time_start":"","title":"CPIE: A Spatiotemporal Visual Analytic Tool to Explore the Impact of Coal Pollution","uid":"w-energyvis-6102","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-energyvis-9750":{"abstract":"This paper presents a novel open system, ChatGrid, for easy, intuitive, and interactive geospatial visualization of large-scale transmission networks. ChatGrid uses state-of-the-art techniques for geospatial visualization of large networks, including 2.5D map views, animated flows, hierarchical and level-based filtering and aggregation to provide visual information in an easy, cognitive manner. The highlight of ChatGrid is a natural language query-based interface powered by a large language model (ChatGPT) that offers a natural and flexible interactive experience whereby users can ask questions and ChatGrid provides responses both in text and visually. 
This paper discusses the architecture, implementation, design decisions, and usage of large language models for ChatGrid.","accessible_pdf":false,"authors":[{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"sjin86@gatech.edu","is_corresponding":true,"name":"Sichen Jin"},{"affiliations":["Pacific Northwest National Laboratory, Richland, United States"],"email":"shrirang.abhyankar@pnnl.gov","is_corresponding":false,"name":"Shrirang Abhyankar"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Sichen Jin"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-energyvis0","slot_id":"w-energyvis-9750","time_end":"","time_stamp":"","time_start":"","title":"ChatGrid: Power Grid Visualization Empowered by a Large Language Model","uid":"w-energyvis-9750","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-future-1007":{"abstract":"Data physicalizations are a time-tested practice for visualizing data, but the sustainability challenges of current physicalization practices have only recently been explored; for example, the usage of carbon-intensive, non-renewable materials like plastic and metal. This work explores clay physicalizations as an approach to these challenges. Using a three-stage process, we investigate the design and sustainability of clay 3D printed physicalizations: 1) exploring the properties and constraints of clay when extruded through a 3D printer, 2) testing a variety of data encodings that work within the constraints, and 3) introducing Rain Gauge, a clay physicalization exploring climate effects on climate data with an impermanent material. Throughout our process, we investigate the material circularity of clay-based digital fabrication by reclaiming and reusing the clay stock in each stage. Finally, we reflect on the implications of ceramic 3D printing for data physicalization through the lenses of practicality and sustainability.","accessible_pdf":false,"authors":[{"affiliations":["University of Minnesota, Minneapolis, United States"],"email":"bridger.g.herman@gmail.com","is_corresponding":true,"name":"Bridger Herman"},{"affiliations":["University of Minnesota, Minneapolis, United States"],"email":"jlrossi@umn.edu","is_corresponding":false,"name":"Jessica Rossi-Mastracci"},{"affiliations":["University of Minnesota, Minneapolis, United States"],"email":"will1070@umn.edu","is_corresponding":false,"name":"Heather Willy"},{"affiliations":["University of Minnesota, Minneapolis, United States"],"email":"mreicher@umn.edu","is_corresponding":false,"name":"Molly Reichert"},{"affiliations":["University of Minnesota, Minneapolis, United States"],"email":"dfk@umn.edu","is_corresponding":false,"name":"Daniel F. 
Keefe"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Bridger Herman"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-future0","slot_id":"w-future-1007","time_end":"","time_stamp":"","time_start":"","title":"Rain Gauge: Exploring the Design and Sustainability of 3D Printed Clay Physicalizations","uid":"w-future-1007","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-future-1008":{"abstract":"We explain our model of data-in-a-void and contrast it with the idea of data-voids to explore how the different framings impact our thinking on sustainability. This contrast supports our assertion that how we think about the data that we work with for visualization design impacts the direction of our thinking and our work. To show this we describe how we view the concept of data-in-a-void as different from that of data-voids. Then we provide two examples, one that relates to existing data about bicycle mobility, and one about non-data for local food production. In the discussion, we then untangle and outline how our thinking about data for sustainability is impacted and influenced by the data-in-a-void model.","accessible_pdf":false,"authors":[{"affiliations":["University of Calgary, Calgary, Canada"],"email":"karly.ross@ucalgary.ca","is_corresponding":true,"name":"Karly Ross"},{"affiliations":["University of Calgary, Calgary, Canada"],"email":"pratim.sengupta@ucalgary.ca","is_corresponding":false,"name":"Pratim Sengupta"},{"affiliations":["University of Calgary, Calgary, Canada"],"email":"wj@wjwillett.net","is_corresponding":false,"name":"Wesley Willett"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Karly Ross"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-future0","slot_id":"w-future-1008","time_end":"","time_stamp":"","time_start":"","title":"(Almost) All Data is Absent Data","uid":"w-future-1008","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-future-1011":{"abstract":"This study explores energy issues across various nations, focusing on sustainable energy availability and accessibility. Representatives from all continents were selected based on their HDI values. Data from Kaggle, spanning 2000-2020, was analyzed using Python to address questions on electricity access, renewable energy generation, and fossil fuel consumption. The research employed statistical and data visualization techniques to reveal trends and disparities. Findings underscore the importance of Python and Kaggle in data analysis. 
The study suggests expanding datasets and incorporating predictive modeling for future research to enhance understanding and decision-making in energy policies.","accessible_pdf":false,"authors":[{"affiliations":["Faculdade Nova Roma, Recife, Brazil"],"email":"gustavodssilva456@gmail.com","is_corresponding":true,"name":"Gustavo Santos Silva"},{"affiliations":["Faculdade Nova Roma, Recife, Brazil"],"email":"lartur671@gmail.com","is_corresponding":false,"name":"Artur Vin\u00edcius Lima Silva"},{"affiliations":["Faculdade Nova Roma, Recife, Brazil"],"email":"lpsouza612@gmail.com","is_corresponding":false,"name":"Lucas Pereira Souza"},{"affiliations":["Faculdade Nova Roma, Recife, Brazil"],"email":"adrianlauzid@gmail.com","is_corresponding":false,"name":"Adrian Lauzid"},{"affiliations":["Universidade Federal de Pernambuco, Recife, Brazil"],"email":"djmm@cin.ufpe.br","is_corresponding":false,"name":"Davi Maia"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Gustavo Santos Silva"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-future0","slot_id":"w-future-1011","time_end":"","time_stamp":"","time_start":"","title":"Renewable Energy Data Visualization: A study with Open Data","uid":"w-future-1011","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-future-1012":{"abstract":"Information visualization holds significant potential to support sustainability goals such as environmental stewardship and climate resilience by transforming complex data into accessible visual formats that enhance public understanding of complex climate change data and drive actionable insights. While the field has predominantly focused on the analytical orientation of visualization, ``critical visualization'' research challenges traditional visualization techniques and goals and expands existing assumptions and conventions in the field. In this paper, I explore how reimagining overlooked aspects of data visualization\u2014such as engagement, emotional resonance, communication, and community empowerment\u2014can contribute to achieving sustainability objectives. I argue that by focusing on inclusive data visualization that promotes clarity, understandability, and public participation, we can make complex data more relatable and actionable, fostering broader connections and mobilizing collective action on critical issues like climate change. Moreover, I discuss the role of emotional receptivity in environmental data communication, stressing the need for visualizations that respect diverse cultural perspectives and emotional responses to achieve impactful outcomes. 
Drawing on insights from a decade of research in public participation and community engagement, I aim to highlight how data visualization can democratize data access and increase public involvement in order to contribute to a more sustainable and resilient future.","accessible_pdf":false,"authors":[{"affiliations":["University of Massachusetts Amherst, Amherst, United States"],"email":"nmahyar@cs.umass.edu","is_corresponding":true,"name":"Narges Mahyar"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Narges Mahyar"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-future0","slot_id":"w-future-1012","time_end":"","time_stamp":"","time_start":"","title":"Reimagining Data Visualization to Address Sustainability Goals","uid":"w-future-1012","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-future-1013":{"abstract":"This position paper discusses the role of data visualizations in journalism based on new areas of study such as visual journalism and data journalism, using examples from the coverage of the catastrophe that occurred in 2024 in Rio Grande do Sul, Brazil, affecting over 2 million people. This case served as a warning to the country about the importance of the climate change agenda and its consequences. The paper includes a literature review in the fields of journalism, data visualization, and psychology to explore the importance of data visualization in combating misinformation and in producing more reliable journalism as a tool for fighting climate change.","accessible_pdf":false,"authors":[{"affiliations":["Universidade Federal de Pernambuco, Recife, Brazil"],"email":"emilly.brito@ufpe.br","is_corresponding":true,"name":"Emilly Brito"},{"affiliations":["Universidade Federal de Pernambuco, Recife, Brazil"],"email":"nivan@cin.ufpe.br","is_corresponding":false,"name":"Nivan Ferreira"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Emilly Brito"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-future0","slot_id":"w-future-1013","time_end":"","time_stamp":"","time_start":"","title":"Visual and Data Journalism as Tools for Fighting Climate Change","uid":"w-future-1013","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-nlviz-1004":{"abstract":"Large Language Models (LLMs) have been widely applied in summarization due to their speedy and high-quality text generation. Summarization for sensemaking involves information compression and insight extraction. Human guidance in sensemaking tasks can prioritize and cluster relevant information for LLMs. However, users must translate their cognitive thinking into natural language to communicate with LLMs. Can we use more readable and operable visual representations to guide the summarization process for sensemaking? Therefore, we propose introducing an intermediate step--a schematic visual workspace for human sensemaking--before the LLM generation to steer and refine the summarization process. 
We conduct a series of proof-of-concept experiments to investigate the potential for enhancing the summarization by GPT-4 through visual workspaces. Leveraging a textual sensemaking dataset with a ground truth summary, we evaluate the impact of a human-generated visual workspace on LLM-generated summarization of the dataset and assess the effectiveness of space-steered summarization. We categorize several types of extractable information from typical human workspaces that can be injected into engineered prompts to steer the LLM summarization. The results demonstrate how such workspaces can help align an LLM with the ground truth, leading to more accurate summarization results than without the workspaces.","accessible_pdf":false,"authors":[{"affiliations":["Computer Science Department, Blacksburg, United States"],"email":"tangxxwhu@gmail.com","is_corresponding":true,"name":"Xuxin Tang"},{"affiliations":["Dod, Laurel, United States"],"email":"ericpkrokos@gmail.com","is_corresponding":false,"name":"Eric Krokos"},{"affiliations":["Department of Defense, College Park, United States"],"email":"visual.tycho@gmail.com","is_corresponding":false,"name":"Kirsten Whitley"},{"affiliations":["City University of Hong Kong, Hong Kong, China"],"email":"canliu@cityu.edu.hk","is_corresponding":false,"name":"Can Liu"},{"affiliations":["Virginia Tech, Blacksburg, United States"],"email":"naren@cs.vt.edu","is_corresponding":false,"name":"Naren Ramakrishnan"},{"affiliations":["Virginia Tech, Blacksburg, United States"],"email":"north@vt.edu","is_corresponding":false,"name":"Chris North"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Xuxin Tang"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-nlviz0","slot_id":"w-nlviz-1004","time_end":"","time_stamp":"","time_start":"","title":"Steering LLM Summarization with Visual Workspaces for Sensemaking","uid":"w-nlviz-1004","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-nlviz-1007":{"abstract":"We explore the use of segmentation and summarization methods for the generation of real-time conversation topic timelines, in the context of glanceable Augmented Reality (AR) visualization. Conversation timelines may serve to summarize and contextualize conversations as they are happening, helping to keep conversations on track. Because dialogue and conversations are broad and unpredictable by nature, and our processing is being done in real-time, not all relevant information may be present in the text at the time it is processed. Thus, we present considerations and challenges which may not be as prevalent in traditional implementations of topic classification and dialogue segmentation. Furthermore, we discuss how AR visualization requirements and design practices require an additional layer of decision making, which must be factored directly into the text processing algorithms. 
We explore three segmentation strategies -- using dialogue segmentation based on the text of the entire conversation, segmenting on 1-minute intervals, and segmenting on 10-second intervals -- and discuss our results.","accessible_pdf":false,"authors":[{"affiliations":["University of Calgary, Calgary, Canada"],"email":"shanna.hollingwor1@ucalgary.ca","is_corresponding":true,"name":"Shanna Li Ching Hollingworth"},{"affiliations":["University of Calgary, Calgary, Canada"],"email":"wj@wjwillett.net","is_corresponding":false,"name":"Wesley Willett"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Shanna Li Ching Hollingworth"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-nlviz0","slot_id":"w-nlviz-1007","time_end":"","time_stamp":"","time_start":"","title":"Towards Real-Time Speech Segmentation for Glanceable Conversation Visualization","uid":"w-nlviz-1007","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-nlviz-1008":{"abstract":"Academic literature reviews have traditionally relied on techniques such as keyword searches and accumulation of relevant back-references, using databases like Google Scholar or IEEEXplore. However, both the precision and accuracy of these search techniques are limited by the presence or absence of specific keywords, making literature review akin to searching for needles in a haystack. We present vitaLITy 2, a solution that uses a Large Language Model or LLM-based approach to identify semantically relevant literature in a textual embedding space. We include a corpus of 66,692 papers from 1970-2023 which are searchable through text embeddings created by three language models. vitaLITy 2 contributes a novel Retrieval Augmented Generation (RAG) architecture and can be interacted with through an LLM with augmented prompts, including summarization of a collection of papers. vitaLITy 2 also provides a chat interface that allows users to perform complex queries without learning any new programming language. This also enables users to take advantage of the knowledge captured in the LLM from its enormous training corpus. 
Finally, we demonstrate the applicability of vitaLITy 2 through two usage scenarios.","accessible_pdf":false,"authors":[{"affiliations":["University of Nottingham, Nottingham, United Kingdom"],"email":"psxah15@nottingham.ac.uk","is_corresponding":false,"name":"Hongye An"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"arpitnarechania@gatech.edu","is_corresponding":true,"name":"Arpit Narechania"},{"affiliations":["University of Nottingham, Nottingham, United Kingdom"],"email":"kai.xu@nottingham.ac.uk","is_corresponding":false,"name":"Kai Xu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Arpit Narechania"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-nlviz0","slot_id":"w-nlviz-1008","time_end":"","time_stamp":"","time_start":"","title":"vitaLITy 2: Reviewing Academic Literature Using Large Language Models","uid":"w-nlviz-1008","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-nlviz-1009":{"abstract":"Analyzing and finding anomalies in multi-dimensional datasets is a cumbersome but vital task across different domains. In the context of financial fraud detection, analysts must quickly identify suspicious activity among transactional data. This is an iterative process made of complex exploratory tasks such as recognizing patterns, grouping, and comparing. To mitigate the information overload inherent to these steps, we present a tool combining automated information highlights, Large Language Model generated textual insights, and visual analytics, facilitating exploration at different levels of detail. We perform a segmentation of the data per analysis area and visually represent each one, making use of automated visual cues to signal which require more attention. Upon user selection of an area, our system provides textual and graphical summaries. The text, acting as a link between the high-level and detailed views of the chosen segment, allows for a quick understanding of relevant details. A thorough exploration of the data comprising the selection can be done through graphical representations. 
The feedback gathered in a study performed with seven domain experts suggests our tool effectively supports and guides exploratory analysis, easing the identification of suspicious information.","accessible_pdf":false,"authors":[{"affiliations":["Feedzai, Lisbon, Portugal"],"email":"beatriz.feliciano@feedzai.com","is_corresponding":true,"name":"Beatriz Feliciano"},{"affiliations":["Feedzai, Lisbon, Portugal"],"email":"rita.costa@feedzai.com","is_corresponding":false,"name":"Rita Costa"},{"affiliations":["Feedzai, Porto, Portugal"],"email":"jean.alves@feedzai.com","is_corresponding":false,"name":"Jean Alves"},{"affiliations":["Feedzai, Madrid, Spain"],"email":"javier.liebana@feedzai.com","is_corresponding":false,"name":"Javier Li\u00e9bana"},{"affiliations":["Feedzai, Lisbon, Portugal"],"email":"diogo.duarte@feedzai.com","is_corresponding":false,"name":"Diogo Ramalho Duarte"},{"affiliations":["Feedzai, Lisbon, Portugal"],"email":"pedro.bizarro@feedzai.com","is_corresponding":false,"name":"Pedro Bizarro"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Beatriz Feliciano"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-nlviz0","slot_id":"w-nlviz-1009","time_end":"","time_stamp":"","time_start":"","title":"\u201cShow Me What\u2019s Wrong!\u201d: Combining Charts and Text to Guide Data Analysis","uid":"w-nlviz-1009","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-nlviz-1010":{"abstract":"Dimension reduction (DR) can transform high-dimensional text embeddings into a 2D visual projection facilitating the exploration of document similarities. However, the projection often lacks connection to the text semantics, due to the opaque nature of text embeddings and non-linear dimension reductions. To address these problems, we propose a gradient-based method for visualizing the spatial semantics of dimensionally reduced text embeddings. This method employs gradients to assess the sensitivity of the projected documents with respect to the underlying words. The method can be applied to existing DR algorithms and text embedding models. Using these gradients, we designed a visualization system that incorporates spatial word clouds into the document projection space to illustrate the impactful text features. 
We further present three usage scenarios that demonstrate the practical applications of our system to facilitate the discovery and interpretation of underlying semantics in text projections.","accessible_pdf":false,"authors":[{"affiliations":["Computer Science, Virginia Tech, Blacksburg, United States"],"email":"wliu3@vt.edu","is_corresponding":false,"name":"Wei Liu"},{"affiliations":["Virginia Tech, Blacksburg, United States"],"email":"north@vt.edu","is_corresponding":false,"name":"Chris North"},{"affiliations":["Tulane University, New Orleans, United States"],"email":"rfaust1@tulane.edu","is_corresponding":true,"name":"Rebecca Faust"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Rebecca Faust"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-nlviz0","slot_id":"w-nlviz-1010","time_end":"","time_stamp":"","time_start":"","title":"Visualizing Spatial Semantics of Dimensionally Reduced Text Embeddings","uid":"w-nlviz-1010","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-nlviz-1011":{"abstract":"Recently, large language models (LLMs) have shown great promise in translating natural language (NL) queries into visualizations, but their \u201cblack-box\u201d nature often limits explainability and debuggability. In response, we present a comprehensive text prompt that, given a tabular dataset and an NL query about the dataset, generates an analytic specification including (detected) data attributes, (inferred) analytic tasks, and (recommended) visualizations. This specification captures key aspects of the query translation process, affording both explainability and debuggability. For instance, it provides mappings from the detected entities to the corresponding phrases in the input query, as well as the specific visual design principles that determined the visualization recommendations. Moreover, unlike prior LLM-based approaches, our prompt supports conversational interaction and ambiguity detection capabilities. 
In this paper, we detail the iterative process of curating our prompt, present a preliminary performance evaluation using GPT-4, and discuss the strengths and limitations of LLMs at various stages of query translation.","accessible_pdf":false,"authors":[{"affiliations":["UNC Charlotte, Charlotte, United States"],"email":"ssah1@uncc.edu","is_corresponding":true,"name":"Subham Sah"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"rmitra34@gatech.edu","is_corresponding":false,"name":"Rishab Mitra"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"arpitnarechania@gatech.edu","is_corresponding":false,"name":"Arpit Narechania"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"endert@gatech.edu","is_corresponding":false,"name":"Alex Endert"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"john.stasko@cc.gatech.edu","is_corresponding":false,"name":"John Stasko"},{"affiliations":["UNC Charlotte, Charlotte, United States"],"email":"wdou1@uncc.edu","is_corresponding":false,"name":"Wenwen Dou"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Subham Sah"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-nlviz0","slot_id":"w-nlviz-1011","time_end":"","time_stamp":"","time_start":"","title":"Generating Analytic Specifications for Data Visualization from Natural Language Queries using Large Language Models","uid":"w-nlviz-1011","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-nlviz-1016":{"abstract":"We explore how natural language authoring with large language models (LLMs) can support the inline authoring of word-scale visualizations (WSVs). While word-scale visualizations that live alongside and within document text can support rich integration of data into written narratives and communication, these small visualizations have typically been challenging to author. We explore how modern LLMs---which are able to generate diverse visualization designs based on simple natural language descriptions---might allow authors to specify and insert new visualizations inline as they write text. 
Drawing on our experiences with an initial prototype built using GPT-4, we highlight the expressive potential of inline natural language visualization authoring and identify opportunities for further research.","accessible_pdf":false,"authors":[{"affiliations":["University of Calgary, Calgary, Canada"],"email":"paige.sobrien@ucalgary.ca","is_corresponding":true,"name":"Paige So'Brien"},{"affiliations":["University of Calgary, Calgary, Canada"],"email":"wj@wjwillett.net","is_corresponding":false,"name":"Wesley Willett"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Paige So'Brien"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-nlviz0","slot_id":"w-nlviz-1016","time_end":"","time_stamp":"","time_start":"","title":"Towards Inline Natural Language Authoring for Word-Scale Visualizations","uid":"w-nlviz-1016","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-nlviz-1019":{"abstract":"As language models have become increasingly successful at a wide array of tasks, different prompt engineering methods have been developed alongside them in order to adapt these models to new tasks. One of them is Tree-of-Thoughts (ToT), a prompting strategy and framework for language model inference and problem-solving. It allows the model to explore multiple solution paths and select the best course of action, producing a tree-like structure of intermediate steps (i.e., thoughts). This method was shown to be effective for several problem types. However, the official implementation has a high barrier to usage as it requires setup overhead and incorporates task-specific problem templates which are difficult to generalize to new problem types. It also does not allow user interaction to improve or suggest new thoughts. We introduce iToT (interactive Tree-of-Thoughts), a generalized and interactive Tree of Thought prompting system. iToT allows users to explore each step of the model\u2019s problem-solving process as well as to correct and extend the model\u2019s thoughts. iToT revolves around a visual interface that facilitates simple and generic ToT usage and makes the problem-solving process transparent to users. This facilitates a better understanding of which thoughts and considerations lead to the model\u2019s final decision. 
Through two case studies, we demonstrate the usefulness of iToT in different human-LLM co-writing tasks.","accessible_pdf":false,"authors":[{"affiliations":["ETHZ, Zurich, Switzerland"],"email":"aboyle@student.ethz.ch","is_corresponding":false,"name":"Alan David Boyle"},{"affiliations":["ETH Z\u00fcrich, Z\u00fcrich, Switzerland"],"email":"igupta@ethz.ch","is_corresponding":true,"name":"Isha Gupta"},{"affiliations":["ETH Z\u00fcrich, Z\u00fcrich, Switzerland"],"email":"shoenig@student.ethz.ch","is_corresponding":false,"name":"Sebastian H\u00f6nig"},{"affiliations":["ETH Z\u00fcrich, Z\u00fcrich, Switzerland"],"email":"lukas.mautner98@gmail.com","is_corresponding":false,"name":"Lukas Mautner"},{"affiliations":["ETH Z\u00fcrich, Z\u00fcrich, Switzerland"],"email":"kenza.amara@ai.ethz.ch","is_corresponding":false,"name":"Kenza Amara"},{"affiliations":["ETH Z\u00fcrich, Z\u00fcrich, Switzerland"],"email":"furui.cheng@inf.ethz.ch","is_corresponding":false,"name":"Furui Cheng"},{"affiliations":["ETH Z\u00fcrich, Z\u00fcrich, Switzerland"],"email":"melassady@ai.ethz.ch","is_corresponding":false,"name":"Mennatallah El-Assady"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Isha Gupta"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-nlviz0","slot_id":"w-nlviz-1019","time_end":"","time_stamp":"","time_start":"","title":"iToT: An Interactive System for Customized Tree-of-Thought Generation","uid":"w-nlviz-1019","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-nlviz-1020":{"abstract":"Strategy management analyses are created by business consultants with common analysis frameworks (i.e. comparative analyses) and associated diagrams. We show these can be largely constructed using LLMs, starting with the extraction of insights from data, organization of those insights according to a strategy management framework, and then depiction in the typical strategy management diagram for that framework (static textual visualizations). 
We discuss caveats and future directions to generalize for broader uses.","accessible_pdf":false,"authors":[{"affiliations":["Uncharted Software, Toronto, Canada"],"email":"richard.brath@alumni.utoronto.ca","is_corresponding":true,"name":"Richard Brath"},{"affiliations":["Uncharted Software, Toronto, Canada"],"email":"miltonjbradley@gmail.com","is_corresponding":false,"name":"Adam James Bradley"},{"affiliations":["Uncharted Software, Toronto, Canada"],"email":"david@jonker.work","is_corresponding":false,"name":"David Jonker"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Richard Brath"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-nlviz0","slot_id":"w-nlviz-1020","time_end":"","time_stamp":"","time_start":"","title":"Strategic management analysis: from data to strategy diagram by LLM","uid":"w-nlviz-1020","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-nlviz-1021":{"abstract":"We present a mixed-methods study to explore how large language models (LLMs) can assist users in the visual exploration and analysis of complex data structures, using knowledge graphs (KGs) as a baseline. We surveyed and interviewed 20 professionals who regularly work with LLMs with the goal of using them for (or alongside) KGs. From the analysis of our interviews, we contribute a preliminary roadmap for the design of LLM-driven visual analysis systems and outline future opportunities in this emergent design space.","accessible_pdf":false,"authors":[{"affiliations":["MIT Lincoln Laboratory, Lexington, United States"],"email":"harry.li@ll.mit.edu","is_corresponding":true,"name":"Harry Li"},{"affiliations":["Tufts University, Medford, United States"],"email":"gabriel.appleby@tufts.edu","is_corresponding":false,"name":"Gabriel Appleby"},{"affiliations":["MIT Lincoln Laboratory, Lexington, United States"],"email":"ashley.suh@ll.mit.edu","is_corresponding":false,"name":"Ashley Suh"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Harry Li"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-nlviz0","slot_id":"w-nlviz-1021","time_end":"","time_stamp":"","time_start":"","title":"A Preliminary Roadmap for LLMs as Visual Data Analysis Assistants","uid":"w-nlviz-1021","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-nlviz-1022":{"abstract":"This study explores the potential of visual representation in understanding the structural elements of Arabic poetry, a subject of significant educational and research interest. Our objective is to make Arabic poetic works more accessible to readers of both Arabic and non-Arabic linguistic backgrounds by employing visualization, exploration, and analytical techniques. We transformed poetry texts into syllables, identified their metrical structures, segmented verses into patterns, and then converted these patterns into visual representations. Following this, we computed and visualized the dissimilarities between these images, and overlaid their differences. 
Our findings suggest that the positional patterns across a poem play a pivotal role in effective poetry clustering, as demonstrated by our newly computed metrics. The results of our clustering experiments showed a marked improvement over previous attempts, thereby providing new insights into the composition and structure of Arabic poetry. This study underscored the value of visual representation in enhancing our understanding of Arabic poetry.","accessible_pdf":false,"authors":[{"affiliations":["University of Neuch\u00e2tel, Neuch\u00e2tel, Switzerland"],"email":"abdelmalek.berkani@unine.ch","is_corresponding":true,"name":"Abdelmalek Berkani"},{"affiliations":["University of Neuch\u00e2tel, Neuch\u00e2tel, Switzerland"],"email":"adrian.holzer@unine.ch","is_corresponding":false,"name":"Adrian Holzer"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Abdelmalek Berkani"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-nlviz0","slot_id":"w-nlviz-1022","time_end":"","time_stamp":"","time_start":"","title":"Enhancing Arabic Poetic Structure Analysis through Visualization","uid":"w-nlviz-1022","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-storygenai-5237":{"abstract":"Communicating data insights in an accessible and engaging manner to a broader audience remains a significant challenge. To address this problem, we introduce the Emoji Encoder, a tool that generates a set of emoji recommendations for the field and category names appearing in a tabular dataset. The selected set of emoji encodings can be used to generate configurable unit charts that combine plain text and emojis as word-scale graphics. These charts can serve to contrast values across multiple quantitative fields for each row in the data or to communicate trends over time. Any resulting chart is simply a block of text characters, meaning that it can be directly copied into a text message or posted on a communication platform such as Slack or Teams. This work represents a step toward our larger goal of developing novel, fun, and succinct data storytelling experiences that engage those who do not identify as data analysts. 
Emoji-based unit charts can offer contextual cues related to the data at the center of a conversation on platforms where emoji-rich communication is typical.","accessible_pdf":false,"authors":[{"affiliations":["University of Waterloo, Waterloo, Canada","Tableau Research, Seattle, United States"],"email":"mbrehmer@uwaterloo.ca","is_corresponding":true,"name":"Matthew Brehmer"},{"affiliations":["Tableau Research, Palo Alto, United States"],"email":"vsetlur@tableau.com","is_corresponding":false,"name":"Vidya Setlur"},{"affiliations":["McGraw Hill, Seattle, United States","Tableau Software, Seattle, United States"],"email":"zoezoezoe.cc@gmail.com","is_corresponding":false,"name":"Zoe Zoe"},{"affiliations":["Northeastern University, Portland, United States"],"email":"m.correll@northeastern.edu","is_corresponding":false,"name":"Michael Correll"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Matthew Brehmer"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-storygenai0","slot_id":"w-storygenai-5237","time_end":"","time_stamp":"","time_start":"","title":"The Data-Wink Ratio: Emoji Encoder for Generating Semantically-Resonant Unit Charts","uid":"w-storygenai-5237","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-storygenai-6168":{"abstract":"Data-driven storytelling serves as a crucial bridge for communicating ideas in a persuasive way. However, the manual creation of data stories is a multifaceted, labor-intensive, and case-specific effort, limiting their broader application. As a result, automating the creation of data stories has emerged as a significant research thrust. Despite advances in Artificial Intelligence, the systematic generation of data stories remains challenging due to their hybrid nature: they must frame a perspective based on a seed idea in a top-down manner, similar to traditional storytelling, while coherently grounding insights of given evidence in a bottom-up fashion, akin to data analysis. These dual requirements necessitate precise constraints on the permissible space of a data story. In this viewpoint, we propose integrating constraints into the data story generation process. Defined upon the hierarchies of interpretation and articulation, constraints shape both narrations and illustrations to align with seed ideas and contextualized evidence. We identify the taxonomy and required functionalities of these constraints. Although constraints can be heterogeneous and latent, we explore the potential to represent them in a computation-friendly fashion via Domain-Specific Languages. 
We believe that leveraging constraints will balance the artistic and engineering aspects of data story generation.","accessible_pdf":false,"authors":[{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"yu.zhe.s.shi@gmail.com","is_corresponding":true,"name":"Yu-Zhe Shi"},{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"haotian.li@connect.ust.hk","is_corresponding":false,"name":"Haotian Li"},{"affiliations":["Peking University, Beijing, China"],"email":"ruanlecheng@whai.pku.edu.cn","is_corresponding":false,"name":"Lecheng Ruan"},{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"huamin@cse.ust.hk","is_corresponding":false,"name":"Huamin Qu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yu-Zhe Shi"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-storygenai0","slot_id":"w-storygenai-6168","time_end":"","time_stamp":"","time_start":"","title":"Constraint representation towards precise data-driven storytelling","uid":"w-storygenai-6168","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-storygenai-7043":{"abstract":"Creating data stories from raw data is challenging due to humans\u2019 limited attention spans and the need for specialized skills. Recent advancements in large language models (LLMs) offer great opportunities to develop systems with autonomous agents to streamline the data storytelling workflow. Though multi-agent systems have benefits such as fully realizing LLM potentials with decomposed tasks for individual agents, designing such systems also faces challenges in task decomposition, performance optimization for sub-tasks, and workflow design. To better understand these issues, we develop Data Director, an LLM-based multi-agent system designed to automate the creation of animated data videos, a representative genre of data stories. Data Director interprets raw data, breaks down tasks, designs agent roles to make informed decisions automatically, and seamlessly integrates diverse components of data videos. A case study demonstrates Data Director\u2019s effectiveness in generating data videos. Throughout development, we have derived lessons learned from addressing challenges, guiding further advancements in autonomous agents for data storytelling. 
We also shed light on future directions for global optimization, human-in-the-loop design, and the application of advanced multi-modal LLMs.","accessible_pdf":false,"authors":[{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"lshenaj@connect.ust.hk","is_corresponding":true,"name":"Leixian Shen"},{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"haotian.li@connect.ust.hk","is_corresponding":false,"name":"Haotian Li"},{"affiliations":["Microsoft, Beijing, China"],"email":"yunvvang@gmail.com","is_corresponding":false,"name":"Yun Wang"},{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"huamin@cse.ust.hk","is_corresponding":false,"name":"Huamin Qu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Leixian Shen"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-storygenai0","slot_id":"w-storygenai-7043","time_end":"","time_stamp":"","time_start":"","title":"From Data to Story: Towards Automatic Animated Data Video Creation with LLM-based Multi-Agent Systems","uid":"w-storygenai-7043","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-storygenai-7072":{"abstract":"Crafting accurate and insightful narratives from data visualization is essential in data storytelling. Like creative writing, where one reads to write a story, data professionals must effectively ``read\" visualizations to create compelling data stories. In education, helping students develop these skills can be achieved through exercises that ask them to create narratives from data plots, demonstrating both ``show\" (describing the plot) and ``tell\" (interpreting the plot). Providing formative feedback on these exercises is crucial but challenging in large-scale educational settings with limited resources. This study explores using GPT-4o, a multimodal LLM, to generate and evaluate narratives from data plots. The LLM was tested in zero-shot, one-shot, and two-shot scenarios, generating narratives and self-evaluating their depth. Human experts also assessed the LLM's outputs. Additionally, the study developed machine learning and LLM-based models to assess student-generated narratives using LLM-generated data. Human experts validated a subset of these machine assessments. 
The findings highlight the potential of LLMs to support scalable formative assessment in teaching data storytelling skills, which has important implications for AI-supported educational interventions.","accessible_pdf":false,"authors":[{"affiliations":["University of Maryland Baltimore County, Baltimore, United States"],"email":"narens1@umbc.edu","is_corresponding":true,"name":"Naren Sivakumar"},{"affiliations":["University of Maryland, Baltimore County, Baltimore, United States"],"email":"lujiec@umbc.edu","is_corresponding":false,"name":"Lujie Karen Chen"},{"affiliations":["University of Maryland,Baltimore County, Baltimore, United States"],"email":"io11937@umbc.edu","is_corresponding":false,"name":"Pravalika Papasani"},{"affiliations":["University of maryland baltimore county, Hanover, United States"],"email":"vignam1@umbc.edu","is_corresponding":false,"name":"Vigna Majmundar"},{"affiliations":["Towson University, Towson, United States"],"email":"jfeng@towson.edu","is_corresponding":false,"name":"Jinjuan Heidi Feng"},{"affiliations":["SRI International, Menlo Park, United States"],"email":"louise.yarnall@sri.com","is_corresponding":false,"name":"Louise Yarnall"},{"affiliations":["University of Alabama, Tuscaloosa, United States"],"email":"jgong@umbc.edu","is_corresponding":false,"name":"Jiaqi Gong"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Naren Sivakumar"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-storygenai0","slot_id":"w-storygenai-7072","time_end":"","time_stamp":"","time_start":"","title":"Show and Tell: Exploring Large Language Model\u2019s Potential in Formative Educational Assessment of Data Stories","uid":"w-storygenai-7072","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-topoinvis-1027":{"abstract":"Advances in high-performance computing require new ways to represent large-scale scientific data to support data storage, data transfers, and data analysis within scientific workflows. Multivariate functional approximation (MFA) has recently emerged as a new continuous meshless representation that approximates raw discrete data with a set of piecewise smooth functions. An MFA model of data thus offers a compact representation and supports high-order evaluation of values and derivatives anywhere in the domain. In this paper, we present CPE-MFA, the first critical point extraction framework designed for MFA models of large-scale, high-dimensional data. CPE-MFA extracts critical points directly from an MFA model without the need for discretization or resampling. 
This is the first step toward enabling continuous implicit models such as MFA to support topological data analysis at scale.","accessible_pdf":false,"authors":[{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"guanqunma94@gmail.com","is_corresponding":true,"name":"Guanqun Ma"},{"affiliations":["Argonne National Laboratory, Lemont, United States"],"email":"dlenz@anl.gov","is_corresponding":false,"name":"David Lenz"},{"affiliations":["Argonne National Laboratory, Lemont, United States"],"email":"tpeterka@mcs.anl.gov","is_corresponding":false,"name":"Tom Peterka"},{"affiliations":["The Ohio State University, Columbus, United States"],"email":"guo.2154@osu.edu","is_corresponding":false,"name":"Hanqi Guo"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"wang.bei@gmail.com","is_corresponding":false,"name":"Bei Wang"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Guanqun Ma"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-topoinvis0","slot_id":"w-topoinvis-1027","time_end":"","time_stamp":"","time_start":"","title":"Critical Point Extraction from Multivariate Functional Approximation","uid":"w-topoinvis-1027","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-topoinvis-1031":{"abstract":"3D symmetric tensor fields have a wide range of applications in science and engineering. The topology of such fields can provide critical insight into not only the structures in tensor fields but also their respective applications. Existing research focuses on the extraction of topological features such as degenerate curves and neutral surfaces. In this paper, we investigate the asymptotic behaviors of these topological features in the sphere of infinity. Our research leads to both theoretical analysis and observations that can aid further classifications of tensor field topology.","accessible_pdf":false,"authors":[{"affiliations":["Oregon State University, Corvallis, United States"],"email":"linxinw@oregonstate.edu","is_corresponding":false,"name":"Xinwei Lin"},{"affiliations":["Oregon State University, Corvallis, United States"],"email":"zhangyue@oregonstate.edu","is_corresponding":false,"name":"Yue Zhang"},{"affiliations":["Oregon State University, Corvallis, United States"],"email":"zhange@eecs.oregonstate.edu","is_corresponding":true,"name":"Eugene Zhang"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Eugene Zhang"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-topoinvis0","slot_id":"w-topoinvis-1031","time_end":"","time_stamp":"","time_start":"","title":"Asymptotic Topology of 3D Linear Symmetric Tensor Fields","uid":"w-topoinvis-1031","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-topoinvis-1033":{"abstract":"Jacobi sets are an important method to investigate the relationship between Morse functions. The Jacobi set for two Morse functions is the set of all points where the functions' gradients are linearly dependent. 
Both the segmentation of the domain by Jacobi sets and the Jacobi sets themselves have proven to be useful tools in multi-field visualization, data analysis in various applications, and for accelerating extraction algorithms. On a triangulated grid, they can be calculated by a piecewise linear interpolation. In practice, Jacobi sets can become very complex and large due to noise and numerical errors. Some techniques for simplifying Jacobi sets exist, but these only reduce individual elements such as noise or are purely theoretical. These techniques often only change the visual representation of the Jacobi sets, but not the underlying data. In this paper, we present an algorithm that simplifies the Jacobi sets for 2D bivariate scalar fields and at the same time modifies the underlying bivariate scalar fields while preserving the essential structures of the fields. We use a neighborhood graph to select the areas to be reduced and collapse these cells individually. We investigate the influence of different neighborhood graphs and present an adaptation for the visualization of Jacobi sets that takes the collapsed cells into account. We apply our algorithm to a range of analytical and real-world data sets and compare it with established methods that also simplify the underlying bivariate scalar fields.","accessible_pdf":false,"authors":[{"affiliations":["Leipzig University, Leipzig, Germany"],"email":"raith@informatik.uni-leipzig.de","is_corresponding":true,"name":"Felix Raith"},{"affiliations":["Leipzig University, Leipzig, Germany"],"email":"scheuermann@informatik.uni-leipzig.de","is_corresponding":false,"name":"Gerik Scheuermann"},{"affiliations":["Leipzig University, Leipzig, Germany"],"email":"heine@informatik.uni-leipzig.de","is_corresponding":false,"name":"Christian Heine"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Felix Raith"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-topoinvis0","slot_id":"w-topoinvis-1033","time_end":"","time_stamp":"","time_start":"","title":"Topological Simplification of Jacobi Sets for Piecewise-Linear Bivariate 2D Scalar Fields","uid":"w-topoinvis-1033","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-topoinvis-1034":{"abstract":"The Morse-Smale complex is a standard tool in visual data analysis. The classic definition is based on a continuous view of the gradient of a scalar function where its zeros are the critical points. These points are connected via gradient curves and surfaces emanating from saddle points, known as separatrices. In a discrete setting, the Morse-Smale complex is commonly extracted by constructing a combinatorial gradient assuming the steepest descent direction. Previous works have shown that this method results in a geometric embedding of the separatrices that can be fundamentally different from those in the continuous case. To achieve a similar embedding, different approaches for constructing a combinatorial gradient were proposed. In this paper, we show that these approaches generate a different topology, i.e., the connectivity between critical points changes. Additionally, we demonstrate that the steepest descent method can compute topologically and geometrically accurate Morse-Smale complexes when applied to certain types of grids. 
Based on these observations, we suggest a method to attain both geometric and topological accuracy for the Morse-Smale complex of data sampled on a uniform grid.","accessible_pdf":false,"authors":[{"affiliations":["KTH Royal Institute of Technology, Stockholm, Sweden"],"email":"sonlt@kth.se","is_corresponding":true,"name":"Son Le Thanh"},{"affiliations":["KTH Royal Institute of Technology, Stockholm, Sweden"],"email":"ankele@iai.uni-bonn.de","is_corresponding":false,"name":"Michael Ankele"},{"affiliations":["KTH Royal Institute of Technology, Stockholm, Sweden"],"email":"weinkauf@kth.se","is_corresponding":false,"name":"Tino Weinkauf"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Son Le Thanh"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-topoinvis0","slot_id":"w-topoinvis-1034","time_end":"","time_stamp":"","time_start":"","title":"Revisiting Accurate Geometry for the Morse-Smale Complexes","uid":"w-topoinvis-1034","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-topoinvis-1038":{"abstract":"This paper presents a nested tracking framework for analyzing cycles in 2D force networks within granular materials. These materials are composed of interacting particles, whose interactions are described by a force network. Understanding the cycles within these networks at various scales and their evolution under external loads is crucial, as they significantly contribute to the mechanical and kinematic properties of the system. Our approach involves computing a cycle hierarchy by partitioning the 2D domain into regions bounded by cycles in the force network. We can adapt concepts from nested tracking graphs originally developed for merge trees by leveraging the duality between this partitioning and the cycles. 
We demonstrate the effectiveness of our method on two force networks derived from experiments with photo-elastic disks.","accessible_pdf":false,"authors":[{"affiliations":["Link\u00f6ping University, Link\u00f6ping, Sweden"],"email":"farhan.rasheed@liu.se","is_corresponding":true,"name":"Farhan Rasheed"},{"affiliations":["Indian Institute of Science, Bangalore, India"],"email":"abrarnaseer@iisc.ac.in","is_corresponding":false,"name":"Abrar Naseer"},{"affiliations":["Link\u00f6ping university, Norrk\u00f6ping, Sweden"],"email":"emma.nilsson@liu.se","is_corresponding":false,"name":"Emma Nilsson"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"talha.bin.masood@liu.se","is_corresponding":false,"name":"Talha Bin Masood"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"ingrid.hotz@liu.se","is_corresponding":false,"name":"Ingrid Hotz"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Farhan Rasheed"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-topoinvis0","slot_id":"w-topoinvis-1038","time_end":"","time_stamp":"","time_start":"","title":"Multi-scale Cycle Tracking in Dynamic Planar Graphs","uid":"w-topoinvis-1038","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-topoinvis-1041":{"abstract":"Tetrahedral meshes are widely used due to their flexibility and adaptability in representing changes of complex geometries and topology. However, most existing data structures struggle to efficiently encode the irregular connectivity of tetrahedral meshes with billions of vertices. We address this problem by proposing a novel framework for efficient and scalable analysis of large tetrahedral meshes using Apache Spark. The proposed framework, called Tetra-Spark, features optimized approaches to locally compute many connectivity relations by first retrieving the Vertex-Tetrahedron (VT) relation. This strategy significantly improves Tetra-Spark's efficiency in performing morphology computations on large tetrahedral meshes. To prove the effectiveness and scalability of such a framework, we conduct a comprehensive comparison against a vanilla Spark implementation for the analysis of tetrahedral meshes. Our experimental evaluation shows that Tetra-Spark achieves up to a 78x speedup and reduces memory usage by up to 80% when retrieving connectivity relations with the VT relation available. 
This optimized design further accelerates subsequent morphology computations, resulting in up to a 47.7x speedup.","accessible_pdf":false,"authors":[{"affiliations":["University of Maryland, College Park, College Park, United States"],"email":"yhqian@umd.edu","is_corresponding":true,"name":"Yuehui Qian"},{"affiliations":["Clemson University, Clemson, United States"],"email":"guoxil@clemson.edu","is_corresponding":false,"name":"Guoxi Liu"},{"affiliations":["Clemson University, Clemson, United States"],"email":"fiurici@clemson.edu","is_corresponding":false,"name":"Federico Iuricich"},{"affiliations":["University of Maryland, College Park, United States"],"email":"deflo@umiacs.umd.edu","is_corresponding":false,"name":"Leila De Floriani"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yuehui Qian"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-topoinvis0","slot_id":"w-topoinvis-1041","time_end":"","time_stamp":"","time_start":"","title":"Efficient representation and analysis for a large tetrahedral mesh using Apache Spark","uid":"w-topoinvis-1041","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-uncertainty-1007":{"abstract":"Symmetric second-order tensors are fundamental in various scientific and engineering domains, as they can represent properties such as material stresses or diffusion processes in brain tissue. In recent years, several approaches have been introduced and improved to analyze these fields using topological features, such as degenerate tensor locations, i.e., the tensor has repeated eigenvalues, or normal surfaces. Traditionally, the identification of such features has been limited to single tensor fields. However, it has become common to create ensembles to account for uncertainties and variability in simulations and measurements. In this work, we explore novel methods for describing and visualizing degenerate tensor locations in 3D symmetric second-order tensor field ensembles. We base our considerations on the tensor mode and analyze its practicality in characterizing the uncertainty of degenerate tensor locations before proposing a variety of visualization strategies to effectively communicate degenerate tensor information. 
We demonstrate our techniques for synthetic and simulation data sets. The results indicate that the interplay of different descriptions for uncertainty can effectively convey information on degenerate tensor locations.","accessible_pdf":false,"authors":[{"affiliations":["University of Cologne, Cologne, Germany"],"email":"tadea.schmitz@uni-koeln.de","is_corresponding":false,"name":"Tadea Schmitz"},{"affiliations":["RWTH Aachen University, Aachen, Germany"],"email":"gerrits@vis.rwth-aachen.de","is_corresponding":true,"name":"Tim Gerrits"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Tim Gerrits"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-uncertainty0","slot_id":"w-uncertainty-1007","time_end":"","time_stamp":"","time_start":"","title":"Exploring Uncertainty Visualization for Degenerate Tensors in 3D Symmetric Second-Order Tensor Field Ensembles","uid":"w-uncertainty-1007","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-uncertainty-1009":{"abstract":"Understanding and communicating data uncertainty is crucial for informed decision-making across various domains, including finance, healthcare, and public policy. This study investigates the impact of gender and acoustic variables on decision-making, confidence, and trust through a crowdsourced experiment. We compared visualization-only representations of uncertainty to text-forward and speech-forward bimodal representations, including multiple synthetic voices across gender. Speech-forward representations led to an increase in risky decisions, and text-forward representations led to lower confidence. Contrary to prior work, speech-forward forecasts did not receive higher ratings of trust. Higher normalized pitch led to a slight increase in decision confidence, but other voice characteristics had minimal impact on decisions and trust. An exploratory analysis of accented speech showed consistent results with the main experiment and additionally indicated lower trust ratings for information presented in Indian and Kenyan accents. 
The results underscore the importance of considering acoustic and contextual factors in the presentation of data uncertainty.","accessible_pdf":false,"authors":[{"affiliations":["University of California Berkeley, Berkeley, United States"],"email":"chase_stokes@berkeley.edu","is_corresponding":true,"name":"Chase Stokes"},{"affiliations":["Stanford University, Stanford, United States"],"email":"sanker@stanford.edu","is_corresponding":false,"name":"Chelsea Sanker"},{"affiliations":["Versalytix, Columbus, United States"],"email":"bcogley@versalytix.com","is_corresponding":false,"name":"Bridget Cogley"},{"affiliations":["Tableau Research, Palo Alto, United States"],"email":"vsetlur@tableau.com","is_corresponding":false,"name":"Vidya Setlur"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Chase Stokes"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-uncertainty0","slot_id":"w-uncertainty-1009","time_end":"","time_stamp":"","time_start":"","title":"Voicing Uncertainty: How Speech, Text, and Visualizations Influence Decisions with Data Uncertainty","uid":"w-uncertainty-1009","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-uncertainty-1010":{"abstract":"The increasing adoption of Deep Neural Networks (DNNs) has led to their application in many challenging scientific visualization tasks. While advanced DNNs offer impressive generalization capabilities, understanding factors such as model prediction quality, robustness, and uncertainty is crucial. These insights can enable domain scientists to make informed decisions about their data. However, DNNs inherently lack the ability to estimate prediction uncertainty, necessitating new research to construct robust uncertainty-aware visualization techniques tailored for various visualization tasks. In this work, we propose uncertainty-aware implicit neural representations to model scalar field data sets effectively and comprehensively study the efficacy and benefits of estimated uncertainty information for volume visualization tasks. We evaluate the effectiveness of two principled deep uncertainty estimation techniques: (1) Deep Ensemble and (2) Monte Carlo Dropout (MCDropout). These techniques enable uncertainty-informed volume visualization in scalar field data sets. Our extensive exploration across multiple data sets demonstrates that uncertainty-aware models produce informative volume visualization results. 
Moreover, integrating prediction uncertainty enhances the trustworthiness of our DNN model, making it suitable for robustly analyzing and visualizing real-world scientific volumetric data sets.","accessible_pdf":false,"authors":[{"affiliations":["IIT kanpur , Kanpur , India"],"email":"saklanishanu@gmail.com","is_corresponding":false,"name":"Shanu Saklani"},{"affiliations":["Indian Institute of Technology Kanpur, Kanpur, India"],"email":"chitwangoel1010@gmail.com","is_corresponding":false,"name":"Chitwan Goel"},{"affiliations":["Indian Institute of Technology Kanpur, Kanpur, India"],"email":"shrey.bansal75@gmail.com","is_corresponding":false,"name":"Shrey Bansal"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"jay.wang@rutgers.edu","is_corresponding":false,"name":"Zhe Wang"},{"affiliations":["Indian Institute of Technology Kanpur (IIT Kanpur), Kanpur, India"],"email":"soumya.cvpr@gmail.com","is_corresponding":true,"name":"Soumya Dutta"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"tushar.athawale@gmail.com","is_corresponding":false,"name":"Tushar M. Athawale"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"pugmire@ornl.gov","is_corresponding":false,"name":"David Pugmire"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"crj@sci.utah.edu","is_corresponding":false,"name":"Chris R. Johnson"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Soumya Dutta"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-uncertainty0","slot_id":"w-uncertainty-1010","time_end":"","time_stamp":"","time_start":"","title":"Uncertainty-Informed Volume Visualization using Implicit Neural Representation","uid":"w-uncertainty-1010","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-uncertainty-1011":{"abstract":"Current research provides methods to communicate uncertainty and adapts classical algorithms of the visualization pipeline to take the uncertainty into account. Various existing visualization frameworks include methods to present uncertain data but do not offer transformation techniques tailored to uncertain data. Therefore, we propose a software package for uncertainty-aware data analysis in Python (UADAPy) offering methods for uncertain data along the visualization pipeline. We aim to provide a platform that is the foundation for further integration of uncertainty algorithms and visualizations. It provides common utility functionality to support research in uncertainty-aware visualization algorithms and makes state-of-the-art research results accessible to the end user. 
The project is available at https://github.com/UniStuttgart-VISUS/uadapy.","accessible_pdf":false,"authors":[{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"patrick.paetzold@uni-konstanz.de","is_corresponding":true,"name":"Patrick Paetzold"},{"affiliations":["University of Stuttgart, Stuttgart, Germany"],"email":"david.haegele@visus.uni-stuttgart.de","is_corresponding":false,"name":"David H\u00e4gele"},{"affiliations":["University of Stuttgart, Stuttgart, Germany"],"email":"m_ever14@uni-muenster.de","is_corresponding":false,"name":"Marina Evers"},{"affiliations":["University of Stuttgart, Stuttgart, Germany"],"email":"weiskopf@visus.uni-stuttgart.de","is_corresponding":false,"name":"Daniel Weiskopf"},{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"oliver.deussen@uni-konstanz.de","is_corresponding":false,"name":"Oliver Deussen"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Patrick Paetzold"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-uncertainty0","slot_id":"w-uncertainty-1011","time_end":"","time_stamp":"","time_start":"","title":"UADAPy: An Uncertainty-Aware Visualization and Analysis Toolbox","uid":"w-uncertainty-1011","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-uncertainty-1012":{"abstract":"Uncertainty visualization is an emerging research topic in data visualization because neglecting uncertainty in visualization can lead to inaccurate assessments. In this short paper, we study the propagation of multivariate data uncertainty in visualization. Although there have been a few advancements in probabilistic uncertainty visualization of multivariate data, three critical challenges remain to be addressed. First, the state-of-the-art probabilistic uncertainty visualization framework is limited to bivariate data (two variables). Second, the existing uncertainty visualization algorithms use computationally intensive techniques and lack support for cross-platform portability. Third, as a consequence of the computational expense, integration into interactive production visualization tools is impractical. In this work, we address all three issues and make a threefold contribution. First, we generalize the state-of-the-art probabilistic framework for bivariate data to multivariate data with an arbitrary number of variables. Second, through utilization of VTK-m\u2019s shared-memory parallelism and cross-platform compatibility features, we demonstrate acceleration of multivariate uncertainty visualization on different many-core architectures, including OpenMP and AMD GPUs. Third, we demonstrate the integration of our algorithms with the ParaView software. 
We demonstrate the utility of our algorithms through experiments on multivariate simulation data.","accessible_pdf":false,"authors":[{"affiliations":["Indiana University Bloomington, Bloomington, United States"],"email":"gautamhari@outlook.com","is_corresponding":true,"name":"Gautam Hari"},{"affiliations":["Indiana University Bloomington, Bloomington, United States"],"email":"nrushad2001@gmail.com","is_corresponding":false,"name":"Nrushad A Joshi"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"jay.wang@rutgers.edu","is_corresponding":false,"name":"Zhe Wang"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"gongq@ornl.gov","is_corresponding":false,"name":"Qian Gong"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"pugmire@ornl.gov","is_corresponding":false,"name":"David Pugmire"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"kmorel@acm.org","is_corresponding":false,"name":"Kenneth Moreland"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"crj@sci.utah.edu","is_corresponding":false,"name":"Chris R. Johnson"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"klasky@ornl.gov","is_corresponding":false,"name":"Scott Klasky"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"pnorbert@ornl.gov","is_corresponding":false,"name":"Norbert Podhorszki"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"tushar.athawale@gmail.com","is_corresponding":false,"name":"Tushar M. Athawale"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Gautam Hari"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-uncertainty0","slot_id":"w-uncertainty-1012","time_end":"","time_stamp":"","time_start":"","title":"FunM^2C: A Filter for Uncertainty Visualization of Multivariate Data on Multi-Core Devices","uid":"w-uncertainty-1012","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-uncertainty-1013":{"abstract":"Uncertainty is inherent to most data, including vector field data, yet it is often omitted in visualizations and representations. Effective uncertainty visualization can enhance the understanding and interpretability of vector field data. For instance, in the context of severe weather events such as hurricanes and wildfires, effective uncertainty visualization can provide crucial insights about fire spread or hurricane behavior and aid in resource management and risk mitigation. Glyphs are commonly used for representing vector uncertainty but are often limited to 2D. In this work, we present a glyph-based technique for accurately representing 3D vector uncertainty and a comprehensive framework for visualization, exploration, and analysis using our new glyphs. We employ hurricane and wildfire examples to demonstrate the efficacy of our glyph design and visualization tool in conveying vector field uncertainty.","accessible_pdf":false,"authors":[{"affiliations":["Scientific Computing and Imaging Institute, Salt Lake City, United States"],"email":"touermi@sci.utah.edu","is_corresponding":true,"name":"Timbwaoga A. J. 
Ouermi"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"jixianli@sci.utah.edu","is_corresponding":false,"name":"Jixian Li"},{"affiliations":["Sandia National Laboratories, Albuquerque, United States"],"email":"zbmorro@sandia.gov","is_corresponding":false,"name":"Zachary Morrow"},{"affiliations":["Sandia National Laboratories, Albuquerque, United States"],"email":"bartv@sandia.gov","is_corresponding":false,"name":"Bart van Bloemen Waanders"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"crj@sci.utah.edu","is_corresponding":false,"name":"Chris R. Johnson"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Timbwaoga A. J. Ouermi"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-uncertainty0","slot_id":"w-uncertainty-1013","time_end":"","time_stamp":"","time_start":"","title":"Glyph-Based Uncertainty Visualization and Analysis of Time-Varying Vector Field","uid":"w-uncertainty-1013","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-uncertainty-1014":{"abstract":"Isosurface visualization is fundamental for exploring and analyzing 3D volumetric data. Marching cubes (MC) algorithms with linear interpolation are commonly used for isosurface extraction and visualization. Although linear interpolation is easy to implement, it has limitations when the underlying data is complex and high-order, which is the case for most real-world data. Linear interpolation can output vertices at the wrong location. Its inability to deal with sharp features and features smaller than grid cells can create holes and broken pieces in the extracted isosurface. Despite these limitations, isosurface visualizations typically do not include insight into the spatial location and the magnitude of these errors. We utilize high-order interpolation methods with MC algorithms and interactive visualization to highlight these uncertainties. Our visualization tool helps identify the regions of high interpolation errors. It also allows users to query local areas for details and compare the differences between isosurfaces from different interpolation methods. In addition, we employ high-order methods to identify and reconstruct possible features that linear methods cannot detect. We showcase how our visualization tool helps explore and understand the extracted isosurface errors through synthetic and real-world data.","accessible_pdf":false,"authors":[{"affiliations":["Scientific Computing and Imaging Institute, Salt Lake City, United States"],"email":"touermi@sci.utah.edu","is_corresponding":true,"name":"Timbwaoga A. J. Ouermi"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"jixianli@sci.utah.edu","is_corresponding":false,"name":"Jixian Li"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"tushar.athawale@gmail.com","is_corresponding":false,"name":"Tushar M. Athawale"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"crj@sci.utah.edu","is_corresponding":false,"name":"Chris R. Johnson"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Timbwaoga A. J. 
Ouermi"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-uncertainty0","slot_id":"w-uncertainty-1014","time_end":"","time_stamp":"","time_start":"","title":"Estimation and Visualization of Isosurface Uncertainty from Linear and High-Order Interpolation Methods","uid":"w-uncertainty-1014","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-uncertainty-1015":{"abstract":"Functional depth is a well-known technique used to derive descriptive statistics (e.g., median, quartiles, and outliers) for 1D data. Surface boxplots extend this concept to ensembles of images, helping scientists and users identify representative and outlier images. However, the computational time for surface boxplots increases cubically with the number of ensemble members, making it impractical for integration into visualization tools. In this paper, we propose a deep-learning solution for efficient depth prediction and computation of surface boxplots for time-varying ensemble data. Our deep learning framework accurately predicts member depths in a surface boxplot, achieving average speedups of 6X on a CPU and 15X on a GPU for the 2D Red Sea dataset with 50 ensemble members compared to the traditional depth computation algorithm. Our approach achieves at least a 99\\% level of rank preservation, with order flipping occurring only at pairs with extremely similar depth values that pose no statistical differences. This local flipping does not significantly impact the overall depth order of the ensemble members.","accessible_pdf":false,"authors":[{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"mengjiao@sci.utah.edu","is_corresponding":true,"name":"Mengjiao Han"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"tushar.athawale@gmail.com","is_corresponding":false,"name":"Tushar M. Athawale"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"jixianli@sci.utah.edu","is_corresponding":false,"name":"Jixian Li"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"crj@sci.utah.edu","is_corresponding":false,"name":"Chris R. Johnson"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Mengjiao Han"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-uncertainty0","slot_id":"w-uncertainty-1015","time_end":"","time_stamp":"","time_start":"","title":"Accelerated Depth Computation for Surface Boxplots with Deep Learning","uid":"w-uncertainty-1015","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-uncertainty-1016":{"abstract":"Wildfire poses substantial risks to our health, environment, and economy. Studying wildfire is challenging due to its complex interaction with the atmosphere dynamics and the terrain. Researchers have employed ensemble simulations to study the relationship between variables and mitigate uncertainties in unpredictable initial conditions. However, many domain scientists are unaware of the advanced visualization tools available for conveying uncertainty. 
To bring these uncertainty visualization techniques to domain scientists, we build an interactive visualization system that utilizes a band-depth-based method to provide a statistical summary and visualization for fire front contours from the ensemble. We augment the visualization system with capabilities to study wildfires as a dynamic system. In this paper, we demonstrate how our system can support domain scientists in studying fire spread patterns, identifying outlier simulations, and navigating to interesting instances based on a summary of events.","accessible_pdf":false,"authors":[{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"jixianli@sci.utah.edu","is_corresponding":true,"name":"Jixian Li"},{"affiliations":["Scientific Computing and Imaging Institute, Salt Lake City, United States"],"email":"touermi@sci.utah.edu","is_corresponding":false,"name":"Timbwaoga A. J. Ouermi"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"crj@sci.utah.edu","is_corresponding":false,"name":"Chris R. Johnson"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Jixian Li"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-uncertainty0","slot_id":"w-uncertainty-1016","time_end":"","time_stamp":"","time_start":"","title":"Visualizing Uncertainties in Ensemble Wildfire Forecast Simulations","uid":"w-uncertainty-1016","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-uncertainty-1017":{"abstract":"Uncertainty visualization is a key component in translating important insights from ensemble data into actionable decision-making by visually conveying various aspects of uncertainty within a system. With the recent advent of fast surrogate models for computationally expensive simulations, users can interact with more aspects of data spaces than ever before. However, the integration of ensemble data with surrogate models in a decision-making tool brings up new challenges for uncertainty visualization, namely how to reconcile and communicate the new and different types of uncertainties brought in by surrogates and how to utilize these new data estimates in actionable ways. In this work, we examine these issues as they relate to high-dimensional data visualization, the integration of discrete datasets and the continuous representations of those datasets, and the unique difficulties associated with systems that allow users to iterate between input and output spaces. We assess the role of uncertainty visualization in facilitating intuitive and actionable interaction with ensemble data and surrogate models, and highlight key challenges in this new frontier of computational simulation.","accessible_pdf":false,"authors":[{"affiliations":["National Renewable Energy Lab, Golden, United States"],"email":"sam.molnar@nrel.gov","is_corresponding":true,"name":"Sam Molnar"},{"affiliations":["National Renewable Energy Laboratory, Golden, United States"],"email":"jd.laurencechasen@nrel.gov","is_corresponding":false,"name":"J.D. 
Laurence-Chasen"},{"affiliations":["The Ohio State University, Columbus, United States","National Renewable Energy Lab, Golden, United States"],"email":"duan.418@osu.edu","is_corresponding":false,"name":"Yuhan Duan"},{"affiliations":["National Renewable Energy Laboratory, Golden, United States"],"email":"julie.bessac@nrel.gov","is_corresponding":false,"name":"Julie Bessac"},{"affiliations":["National Renewable Energy Laboratory, Golden, United States"],"email":"kristi.potter@nrel.gov","is_corresponding":false,"name":"Kristi Potter"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Sam Molnar"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-uncertainty0","slot_id":"w-uncertainty-1017","time_end":"","time_stamp":"","time_start":"","title":"Uncertainty Visualization Challenges in Decision Systems with Ensemble Data & Surrogate Models","uid":"w-uncertainty-1017","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-uncertainty-1018":{"abstract":"Although people frequently make decisions based on uncertain forecasts about future events, there is little guidance about how best to represent the uncertainty in forecasts. One common approach is to use multiple forecast visualizations, in which multiple forecasts are plotted on the same graph. This provides an implicit representation of the uncertainty in the data, but it is not clear how many forecasts to show, or how viewers might be influenced by seeing the more extreme forecasts rather than those closer to the mean. In this study, we showed participants forecasts of wind speed data and they made decisions based on their predictions about the future wind speed. We allowed participants to choose how many forecasts to view prior to making a decision, and we manipulated the ordering of the forecasts and the cost of each additional forecast. We found that participants viewed more forecasts when the outcome was more ambiguous. The order of the forecasts had little impact on their decisions when there was no cost for the additional information. However, when there was a cost for each forecast, the participants were much more likely to make a guess based on only the first forecast shown. In this case, showing one of the extreme forecasts first led to less optimal decisions.","accessible_pdf":false,"authors":[{"affiliations":["Sandia National Laboratories, Albuquerque, United States"],"email":"lematze@sandia.gov","is_corresponding":true,"name":"Laura Matzen"},{"affiliations":["Sandia National Laboratories, Albuquerque, United States"],"email":"mcstite@sandia.gov","is_corresponding":false,"name":"Mallory C Stites"},{"affiliations":["Sandia National Laboratories, Albuquerque, United States"],"email":"kmdivis@sandia.gov","is_corresponding":false,"name":"Kristin M Divis"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"abendeck3@gatech.edu","is_corresponding":false,"name":"Alexander Bendeck"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"john.stasko@cc.gatech.edu","is_corresponding":false,"name":"John Stasko"},{"affiliations":["Northeastern University, Boston, United States"],"email":"l.padilla@northeastern.edu","is_corresponding":false,"name":"Lace M. 
Padilla"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Laura Matzen"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-uncertainty0","slot_id":"w-uncertainty-1018","time_end":"","time_stamp":"","time_start":"","title":"Effects of Forecast Number, Order, and Cost in Multiple Forecast Visualizations","uid":"w-uncertainty-1018","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-uncertainty-1019":{"abstract":"We present a simple comparative framework for testing and developing uncertainty modeling in uncertain marching cubes implementations. The selection of a model to represent the probability distribution of uncertain values directly influences the memory use, run time, and accuracy of an uncertainty visualization algorithm. We use an entropy calculation directly on ensemble data to establish an expected result and then compare the entropy from various probability models, including uniform, Gaussian, histogram, and quantile models. Our results verify that models matching the distribution of the ensemble indeed match the entropy. We further show that fewer bins in nonparametric histogram models are more effective whereas large numbers of bins in quantile models approach data accuracy.","accessible_pdf":false,"authors":[{"affiliations":["University of Illinois Urbana-Champaign, Urbana, United States"],"email":"sisneros@illinois.edu","is_corresponding":true,"name":"Robert Sisneros"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"tushar.athawale@gmail.com","is_corresponding":false,"name":"Tushar M. 
Athawale"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"kmorel@acm.org","is_corresponding":false,"name":"Kenneth Moreland"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"pugmire@ornl.gov","is_corresponding":false,"name":"David Pugmire"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Robert Sisneros"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-uncertainty0","slot_id":"w-uncertainty-1019","time_end":"","time_stamp":"","time_start":"","title":"An Entropy-Based Test and Development Framework for Uncertainty Modeling in Level-Set Visualizations","uid":"w-uncertainty-1019","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-vis4climate-1008":{"abstract":"Presenting the effects of and effective countermeasures for climate change is a significant challenge in science communication. Data-driven storytelling and narrative visualization can be part of the solution. However, the communication is limited when restricted to global or cross-regional scales, as climate effects are particular to the location and adaptations need to be local. In this work, we focus on data-driven storytelling that communicates local impacts of climate change. We analyze the adoption of data-driven storytelling by local news media in addressing climate-related topics. Further, we investigate the specific characteristics of the local scenario and present three application examples to showcase potential local data-driven stories. Since these examples are rooted in university teaching, we also discuss educational aspects. 
Finally, we summarize the interdisciplinary research challenges and opportunities for application associated with data-driven storytelling in a local context.","accessible_pdf":false,"authors":[{"affiliations":["University of Bamberg, Bamberg, Germany"],"email":"fabian.beck@uni-bamberg.de","is_corresponding":true,"name":"Fabian Beck"},{"affiliations":["University of Bamberg, Bamberg, Germany"],"email":"lukas.panzer@uni-bamberg.de","is_corresponding":false,"name":"Lukas Panzer"},{"affiliations":["University of Bamberg, Bamberg, Germany"],"email":"marc.redepenning@uni-bamberg.de","is_corresponding":false,"name":"Marc Redepenning"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Fabian Beck"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-vis4climate0","slot_id":"w-vis4climate-1008","time_end":"","time_stamp":"","time_start":"","title":"Local Climate Data Stories: Data-driven Storytelling to Communicate Effects and Mitigation of Climate Change in a Local Context","uid":"w-vis4climate-1008","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-vis4climate-1011":{"abstract":"Climate change\u2019s global impact calls for coordinated visualization efforts to enhance collaboration and communication among key partners such as domain experts, community members, and policy makers. We present a collaborative initiative, EcoViz, where visualization practitioners and key partners co-designed environmental data visualizations to illustrate impacts on ecosystems and the benefit of informed management and nature-based solutions. Our three use cases rely on unique processing pipelines to represent time-dependent natural phenomena by combining cinematic, scientific, and information visualization methods. Scientific outputs are displayed through narrative data-driven animations, interactive geospatial web applications, and immersive Unreal Engine applications. Each field\u2019s decision-making process is specific, driving design decisions about the best representation and medium for each use case. Data-driven cinematic videos with simple charts and minimal annotations proved most effective for engaging large, diverse audiences. This flexible medium facilitates reuse, maintains critical details, and integrates well into broader narrative videos. 
The need for interdisciplinary visualizations highlights the importance of funding to integrate visualization practitioners throughout the scientific process to better translate data and knowledge into informed policy and practice.","accessible_pdf":false,"authors":[{"affiliations":["University of California, San Diego, San Diego, United States"],"email":"jkb@ucsc.edu","is_corresponding":true,"name":"Jessica Marielle Kendall-Bar"},{"affiliations":["University of California, San Diego, La Jolla, United States"],"email":"inealey@ucsd.edu","is_corresponding":false,"name":"Isaac Nealey"},{"affiliations":["University of California, Santa Cruz, Santa Cruz, United States"],"email":"icostell@ucsc.edu","is_corresponding":false,"name":"Ian Costello"},{"affiliations":["University of California, Santa Cruz, Santa Cruz, United States"],"email":"chlowrie@ucsc.edu","is_corresponding":false,"name":"Christopher Lowrie"},{"affiliations":["University of California, San Diego, San Diego, United States"],"email":"khn009@ucsd.edu","is_corresponding":false,"name":"Kevin Huynh Nguyen"},{"affiliations":["University of California San Diego, La Jolla, United States"],"email":"pponganis@ucsd.edu","is_corresponding":false,"name":"Paul J. Ponganis"},{"affiliations":["University of California, Santa Cruz, Santa Cruz, United States"],"email":"mwbeck@ucsc.edu","is_corresponding":false,"name":"Michael W. Beck"},{"affiliations":["University of California, San Diego, San Diego, United States"],"email":"ialtintas@ucsd.edu","is_corresponding":false,"name":"\u0130lkay Alt\u0131nta\u015f"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Jessica Marielle Kendall-Bar"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-vis4climate0","slot_id":"w-vis4climate-1011","time_end":"","time_stamp":"","time_start":"","title":"EcoViz: an iterative methodology for designing multifaceted data-driven environmental visualizations that communicate ecosystem impacts and envision nature-based solutions","uid":"w-vis4climate-1011","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-vis4climate-1018":{"abstract":"Household consumption significantly impacts climate change. Yet designing interventions to encourage consumption reduction that are tailored to each home's needs remains challenging. To address this, we developed Eco-Garden, a data sculpture designed to visualise household consumption, aiming to promote sustainable practices. Eco-Garden serves as both an aesthetic piece for visitors and a functional tool for household members to understand their resource consumption. In this paper, we present the human-centred design process of Eco-Garden and preliminary findings from our field study. We conducted a field study with 15 households to explore participants' experience with Eco-Garden and its potential to encourage sustainable practices at home. Our participants provided positive feedback on integrating Eco-Garden into their homes, highlighting considerations such as aesthetics, physicality, and the calm manner of presenting consumption data. 
Our insights contribute to developing data sculptures for households that can facilitate meaningful interactions with consumption data.","accessible_pdf":false,"authors":[{"affiliations":["Cardiff University, Cardiff, United Kingdom"],"email":"pereraud@cardiff.ac.uk","is_corresponding":true,"name":"Dushani Ushettige"},{"affiliations":["Cardiff University, Cardiff, United Kingdom"],"email":"verdezotodiasn@cardiff.ac.uk","is_corresponding":false,"name":"Nervo Verdezoto"},{"affiliations":["Cardiff University, Cardiff, United Kingdom"],"email":"lannon@cardiff.ac.uk","is_corresponding":false,"name":"Simon Lannon"},{"affiliations":["Cardiff University, Cardiff, United Kingdom"],"email":"gwilliamja@cardiff.ac.uk","is_corresponding":false,"name":"Jullie Gwilliam"},{"affiliations":["Cardiff University, Cardiff, United Kingdom"],"email":"eslambolchilarp@cardiff.ac.uk","is_corresponding":false,"name":"Parisa Eslambolchilar"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Dushani Ushettige"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-vis4climate0","slot_id":"w-vis4climate-1018","time_end":"","time_stamp":"","time_start":"","title":"Eco-Garden: A Data Sculpture to Encourage Sustainable Practices in Everyday Life in Households","uid":"w-vis4climate-1018","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-vis4climate-1023":{"abstract":"Consumers have the potential to play a large role in mitigating the climate crisis by taking on more pro-environmental behavior, for example by making more sustainable food choices. However, while environmental awareness is common among consumers, it is not always clear what the current impact of one's own food choices is, and consequently it is not always clear how or why their own behavior must change, or how important the change is. Immersive technologies have been shown to aid in these aspects. In this paper, we bring food production into the home by means of handheld augmented reality. Using the current prototype, users can input which ingredients are in their meal on their smartphone, and after making a 3D scan of their kitchen, plants, livestock, feed, and water required for all are visualized in front of them. 
In this paper, we describe the design of the current prototype and, by analyzing the current state of research on virtual and augmented reality for sustainability research, we describe in which ways the application could be extended in terms of data, models, and interaction, to investigate the most prominent issues within environmental sustainability communications research.","accessible_pdf":false,"authors":[{"affiliations":["Wageningen University and Research, Wageningen, Netherlands"],"email":"nina.rosa-dejong@wur.nl","is_corresponding":true,"name":"Nina Rosa"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Nina Rosa"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-vis4climate0","slot_id":"w-vis4climate-1023","time_end":"","time_stamp":"","time_start":"","title":"AwARe: Using handheld augmented reality for researching the potential of food resource information visualization","uid":"w-vis4climate-1023","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},"w-vis4climate-1024":{"abstract":"This paper details the development and implementation of a collaborative exhibit at Boston\u2019s Museum of Science showcasing interactive data visualizations designed to educate the public on global sustainability and urban environmental concerns. Supported by cross-institutional collaboration, the exhibit provided a rich real-world learning opportunity for students, resulting in a set of public-facing educational resources that informed visitors of global sustainability concerns through the lens of a local municipality. The realization of this project was made possible only by a close collaboration between a municipality, science museum and academic partners, all of whom committed their expertise and resources at both leadership and implementation team levels. This initiative highlights the value of cross-institutional collaboration to ignite the transformative potential of interactive visualizations in driving public engagement of local and global sustainability issues. 
Focusing on promoting sustainability and enhancing community well-being, this initiative highlights the potential of cross-institutional collaboration and locally-relevant interactive data visualizations to educate, inspire action, and foster community engagement in addressing climate change and urban sustainability.","accessible_pdf":false,"authors":[{"affiliations":["Brown University, Providence, United States","Rhode Island School of Design, Providence, United States"],"email":"bae@brown.edu","is_corresponding":true,"name":"Beth Altringer Eagle"},{"affiliations":["Harvard University, Cambridge, United States"],"email":"sylvan@media.mit.edu","is_corresponding":false,"name":"Elisabeth Sylvan"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Beth Altringer Eagle"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-vis4climate0","slot_id":"w-vis4climate-1024","time_end":"","time_stamp":"","time_start":"","title":"Cultivating Climate Action Through Multi-Institutional Collaboration: Innovative Data Visualization Educational Programs and Exhibits for Public Engagement","uid":"w-vis4climate-1024","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""}}
diff --git a/program/serve_session_list.json b/program/serve_session_list.json
index 74f9ebea0..e5990f220 100644
--- a/program/serve_session_list.json
+++ b/program/serve_session_list.json
@@ -1 +1 @@
-{"a-biomedchallenge":{"event":"Bio+MedVis Challenges","event_description":"","event_prefix":"a-biomedchallenge","event_type":"associated","event_url":"","long_name":"Bio+MedVis Challenges","organizers":[],"sessions":[]},"a-ldav":{"event":"LDAV: 13th IEEE Symposium on Large Data Analysis and Visualization","event_description":"","event_prefix":"a-ldav","event_type":"associated","event_url":"","long_name":"LDAV: 13th IEEE Symposium on Large Data Analysis and Visualization","organizers":[],"sessions":[]},"a-scivis-contest":{"event":"SciVis Contest","event_description":"","event_prefix":"a-scivis-contest","event_type":"associated","event_url":"","long_name":"SciVis Contest","organizers":[],"sessions":[]},"a-visap":{"event":"VIS Arts Program","event_description":"","event_prefix":"a-visap","event_type":"visap","event_url":"","long_name":"VIS Arts Program","organizers":[],"sessions":[]},"a-visinpractice":{"event":"VisInPractice","event_description":"","event_prefix":"a-visinpractice","event_type":"associated","event_url":"","long_name":"VisInPractice","organizers":[],"sessions":[]},"a-vizsec":{"event":"VizSec","event_description":"","event_prefix":"a-vizsec","event_type":"associated","event_url":"","long_name":"VizSec","organizers":[],"sessions":[]},"conf":{"event":"Conference Events","event_description":"","event_prefix":"conf","event_type":"vis","event_url":"","long_name":"Conference Events","organizers":[],"sessions":[]},"s-vds":{"event":"VDS: Visualization in Data Science Symposium","event_description":"","event_prefix":"s-vds","event_type":"associated","event_url":"","long_name":"VDS: Visualization in Data Science Symposium","organizers":[],"sessions":[]},"t-analysis":{"event":"Visualization Analysis and Design","event_description":"","event_prefix":"t-analysis","event_type":"tutorial","event_url":"","long_name":"Visualization Analysis and 
Design","organizers":[],"sessions":[]},"t-color":{"event":"Generating Color Schemes for our Data Visualizations","event_description":"","event_prefix":"t-color","event_type":"tutorial","event_url":"","long_name":"Generating Color Schemes for our Data Visualizations","organizers":[],"sessions":[]},"t-immersive":{"event":"Developing Immersive and Collaborative Visualizations with Web Technologies","event_description":"","event_prefix":"t-immersive","event_type":"tutorial","event_url":"","long_name":"Developing Immersive and Collaborative Visualizations with Web Technologies","organizers":[],"sessions":[]},"t-llm4vis":{"event":"LLM4Vis: Large Language Models for Information Visualization","event_description":"","event_prefix":"t-llm4vis","event_type":"tutorial","event_url":"","long_name":"LLM4Vis: Large Language Models for Information Visualization","organizers":[],"sessions":[]},"t-nationalscience":{"event":"Enabling Scientific Discovery: A Tutorial for Harnessing the Power of the National Science Data Fabric for Large-Scale Data Analysis","event_description":"","event_prefix":"t-nationalscience","event_type":"tutorial","event_url":"","long_name":"Enabling Scientific Discovery: A Tutorial for Harnessing the Power of the National Science Data Fabric for Large-Scale Data Analysis","organizers":[],"sessions":[]},"t-participatory":{"event":"Preparing, Conducting, and Analyzing Participatory Design Sessions for Information Visualizations","event_description":"","event_prefix":"t-participatory","event_type":"tutorial","event_url":"","long_name":"Preparing, Conducting, and Analyzing Participatory Design Sessions for Information Visualizations","organizers":[],"sessions":[]},"t-revisit":{"event":"Running Online User Studies with the reVISit Framework","event_description":"","event_prefix":"t-revisit","event_type":"tutorial","event_url":"","long_name":"Running Online User Studies with the reVISit Framework","organizers":[],"sessions":[]},"v-cga":{"event":"CG&A Invited Partnership Presentations","event_description":"","event_prefix":"v-cga","event_type":"invited","event_url":"","long_name":"CG&A Invited Partnership Presentations","organizers":[],"sessions":[{"chair":[],"discord_category":"","discord_channel":"","discord_channel_id":"","discord_link":"","event_prefix":"v-cga","ff_link":"","session_id":"cga0","session_image":"cga0.png","time_end":"","time_slots":[{"abstract":"We consider the general problem known as job shop scheduling, in which multiple jobs consist of sequential operations that need to be executed or served by appropriate machines having limited capacities. For example, train journeys (jobs) consist of moves and stops (operations) to be served by rail tracks and stations (machines). A schedule is an assignment of the job operations to machines and times where and when they will be executed. The developers of computational methods for job scheduling need tools enabling them to explore how their methods work. At a high level of generality, we define the system of pertinent exploration tasks and a combination of visualizations capable of supporting the tasks. We provide general descriptions of the purposes, contents, visual encoding, properties, and interactive facilities of the visualizations and illustrate them with images from an example implementation in air traffic management. We justify the design of the visualizations based on the tasks, principles of creating visualizations for pattern discovery, and scalability requirements. 
The outcomes of our research are sufficiently general to be of use in a variety of applications.","accessible_pdf":false,"authors":[{"affiliations":"","email":"gennady.andrienko@iais.fraunhofer.de","is_corresponding":true,"name":"Gennady Andrienko"},{"affiliations":"","email":"natalia.andrienko@iais.fraunhofer.de","is_corresponding":false,"name":"Natalia Andrienko"},{"affiliations":"","email":"jmcordero@e-crida.enaire.es","is_corresponding":false,"name":"Jose Manuel Cordero Garcia"},{"affiliations":"","email":"dirk.hecker@iais.fraunhofer.de","is_corresponding":false,"name":"Dirk Hecker"},{"affiliations":"","email":"georgev@unipi.gr","is_corresponding":false,"name":"George A. Vouros"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Gennady Andrienko"],"doi":"10.1109/MCG.2022.3163437","external_paper_link":"","fno":"9745375","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Visualization, Schedules, Task Analysis, Optimization, Job Shop Scheduling, Data Analysis, Processor Scheduling, Iterative Methods"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"cga0","slot_id":"v-cga-9745375","time_end":"","time_stamp":"","time_start":"","title":"Supporting Visual Exploration of Iterative Job Scheduling","uid":"v-cga-9745375","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"The number of online news articles available nowadays is rapidly increasing. When exploring articles on online news portals, navigation is mostly limited to the most recent ones. The spatial context and the history of topics are not immediately accessible. To support readers in the exploration or research of articles in large datasets, we developed an interactive 3D globe visualization. We worked with datasets from multiple online news portals containing up to 45,000 articles. Using agglomerative hierarchical clustering, we represent the referenced locations of news articles on a globe with different levels of detail. We employ two interaction schemes for navigating the viewpoint on the visualization, including support for hand-held devices and desktop PCs, and provide search functionality and interactive filtering. 
Based on this framework, we explore additional modules for jointly exploring the spatial and temporal domain of the dataset and incorporating live news into the visualization.","accessible_pdf":false,"authors":[{"affiliations":"","email":"nicholas.ingulfsen@gmail.com","is_corresponding":false,"name":"Nicholas Ingulfsen"},{"affiliations":"","email":"simone.schaub@visinf.tu-darmstadt.de","is_corresponding":false,"name":"Simone Schaub-Meyer"},{"affiliations":"","email":"grossm@inf.ethz.ch","is_corresponding":false,"name":"Markus Gross"},{"affiliations":"","email":"tobias.guenther@fau.de","is_corresponding":true,"name":"Tobias G\u00fcnther"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Tobias G\u00fcnther"],"doi":"10.1109/MCG.2021.3127434","external_paper_link":"","fno":"9612019","has_image":false,"has_pdf":false,"image_caption":"","keywords":["News Articles, Number Of Articles, Headlines, Interactive Visualization, Online News, Agglomerative Clustering, Local News, Interactive Exploration, Desktop PC, Different Levels Of Detail, News Portals, Spatial Information, User Study, 3D Space, Human-computer Interaction, Temporal Information, Third Dimension, Tablet Computer, Pie Chart, News Stories, 3D Visualization, Article Details, Visual Point, Bottom Of The Screen, Geospatial Data, Type Of Visualization, Largest Dataset, Tagging Location, Live Feed"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"cga0","slot_id":"v-cga-9612019","time_end":"","time_stamp":"","time_start":"","title":"News Globe: Visualization of Geolocalized News Articles","uid":"v-cga-9612019","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"In many applications, developed deep-learning models need to be iteratively debugged and refined to improve the model efficiency over time. Debugging some models, such as temporal multilabel classification (TMLC) where each data point can simultaneously belong to multiple classes, can be especially more challenging due to the complexity of the analysis and instances that need to be reviewed. In this article, focusing on video activity recognition as an application of TMLC, we propose DETOXER, an interactive visual debugging system to support finding different error types and scopes through providing multiscope explanations.","accessible_pdf":false,"authors":[{"affiliations":"","email":"mahsannourani@ufl.edu","is_corresponding":true,"name":"Mahsan Nourani"},{"affiliations":"","email":"chiradeep.roy@utdallas.edu","is_corresponding":false,"name":"Chiradeep Roy"},{"affiliations":"","email":"dhoneycutt@ufl.edu","is_corresponding":false,"name":"Donald R. Honeycutt"},{"affiliations":"","email":"eragan@ufl.edu","is_corresponding":false,"name":"Eric D. 
Ragan"},{"affiliations":"","email":"vibhav.gogate@utdallas.edu","is_corresponding":false,"name":"Vibhav Gogate"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Mahsan Nourani"],"doi":"10.1109/MCG.2022.3201465","external_paper_link":"","fno":"9866547","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Debugging, Analytical Models, Heating Systems, Data Models, Computational Modeling, Activity Recognition, Deep Learning, Multi Label Classification, Visualization Tool, Temporal Classification, Visual Debugging, False Positive, False Negative, Active Components, Deep Learning Models, Types Of Errors, Video Frames, Error Detection, Detection Of Types, Action Recognition, Interactive Visualization, Sequence Of Points, Design Goals, Positive Errors, Critical Outcomes, Error Patterns, Global Panel, False Negative Rate, False Positive Rate, Heatmap, Visual Approach, Truth Labels, True Positive, Confidence Score, Anomaly Detection, Interface Elements"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"cga0","slot_id":"v-cga-9866547","time_end":"","time_stamp":"","time_start":"","title":"DETOXER: A Visual Debugging Tool With Multiscope Explanations for Temporal Multilabel Classification","uid":"v-cga-9866547","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"The Internet of Food (IoF) is an emerging field in smart foodsheds, involving the creation of a knowledge graph (KG) about the environment, agriculture, food, diet, and health. However, the heterogeneity and size of the KG present challenges for downstream tasks, such as information retrieval and interactive exploration. To address those challenges, we propose an interactive knowledge and learning environment (IKLE) that integrates three programming and modeling languages to support multiple downstream tasks in the analysis pipeline. To make IKLE easier to use, we have developed algorithms to automate the generation of each language. In addition, we collaborated with domain experts to design and develop a dataflow visualization system, which embeds the automatic language generations into components and allows users to build their analysis pipeline by dragging and connecting components of interest. We have demonstrated the effectiveness of IKLE through three real-world case studies in smart foodsheds.","accessible_pdf":false,"authors":[{"affiliations":"","email":"tu.253@osu.edu","is_corresponding":true,"name":"Yamei Tu"},{"affiliations":"","email":"wang.5502@osu.edu","is_corresponding":false,"name":"Xiaoqi Wang"},{"affiliations":"","email":"qiu.580@osu.edu","is_corresponding":false,"name":"Rui Qiu"},{"affiliations":"","email":"hwshen@cse.ohio-state.edu","is_corresponding":false,"name":"Han-Wei Shen"},{"affiliations":"","email":"mmmille6@wisc.edu","is_corresponding":false,"name":"Michelle Miller"},{"affiliations":"","email":"jinmeng.rao@wisc.edu","is_corresponding":false,"name":"Jinmeng Rao"},{"affiliations":"","email":"song.gao@wisc.edu","is_corresponding":false,"name":"Song Gao"},{"affiliations":"","email":"prhuber@ucdavis.edu","is_corresponding":false,"name":"Patrick R. Huber"},{"affiliations":"","email":"adhollander@ucdavis.edu","is_corresponding":false,"name":"Allan D. 
Hollander"},{"affiliations":"","email":"matthew@ic-foods.org","is_corresponding":false,"name":"Matthew Lange"},{"affiliations":"","email":"cgarcia@tacc.utexas.edu","is_corresponding":false,"name":"Christian R. Garcia"},{"affiliations":"","email":"jstubbs@tacc.utexas.edu","is_corresponding":false,"name":"Joe Stubbs"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yamei Tu"],"doi":"10.1109/MCG.2023.3263960","external_paper_link":"","fno":"10091124","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Learning Environment, Interactive Learning Environments, Programming Language, Visual System, Analysis Pipeline, Patterns In Data, Flow Data, Human-computer Interaction, Food Systems, Information Retrieval, Domain Experts, Language Model, Automatic Generation, Interactive Exploration, Cyberinfrastructure, Pre-trained Language Models, Resource Description Framework, SPARQL Query, DBpedia, Entity Types, Data Visualization, Resilience Analysis, Load Data, Query Results, Supply Chain, Network Flow"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"cga0","slot_id":"v-cga-10091124","time_end":"","time_stamp":"","time_start":"","title":"An Interactive Knowledge and Learning Environment in Smart Foodsheds","uid":"v-cga-10091124","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Set visualization facilitates the exploration and analysis of set-type data. However, how sets should be visualized when the data are uncertain is still an open research challenge. To address the problem of depicting uncertainty in set visualization, we ask 1) which aspects of set type data can be affected by uncertainty and 2) which characteristics of uncertainty influence the visualization design. We answer these research questions by first describing a conceptual framework that brings together 1) the information that is primarily relevant in sets (i.e., set membership, set attributes, and element attributes) and 2) different plausible categories of (un)certainty (i.e., certainty, undefined uncertainty as a binary fact, and defined uncertainty as quantifiable measure). Following the structure of our framework, we systematically discuss basic visualization examples of integrating uncertainty in set visualizations. 
We draw on existing knowledge about general uncertainty visualization and previous evidence of its effectiveness.","accessible_pdf":false,"authors":[{"affiliations":"","email":"christian.tominski@uni-rostock.de","is_corresponding":false,"name":"Christian Tominski"},{"affiliations":"","email":"m.behrisch@uu.nl","is_corresponding":true,"name":"Michael Behrisch"},{"affiliations":"","email":"susanne.bleisch@fhnw.ch","is_corresponding":false,"name":"Susanne Bleisch"},{"affiliations":"","email":"sara.fabrikant@geo.uzh.ch","is_corresponding":false,"name":"Sara Irina Fabrikant"},{"affiliations":"","email":"eva.mayr@donau-uni.ac.at","is_corresponding":false,"name":"Eva Mayr"},{"affiliations":"","email":"miksch@ifs.tuwien.ac.at","is_corresponding":false,"name":"Silvia Miksch"},{"affiliations":"","email":"helen.purchase@monash.edu","is_corresponding":false,"name":"Helen Purchase"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Michael Behrisch"],"doi":"10.1109/MCG.2023.3300441","external_paper_link":"","fno":"10198358","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Uncertainty, Data Visualization, Measurement Uncertainty, Visual Analytics, Terminology, Task Analysis, Surveys, Conceptual Framework, Cardinality, Data Visualization, Visual Representation, Measure Of The Amount, Set Membership, Intersection Set, Visual Design, Different Types Of Uncertainty, Missing Values, Visual Methods, Fuzzy Set, Age Of Students, Color Values, Uncertainty Values, Explicit Representation, Aggregate Value, Exact Information, Uncertain Information, Table Cells, Temporal Uncertainty, Uncertain Data, Representation Of Uncertainty, Implicit Representation, Spatial Uncertainty, Point Symbol, Visual Clutter, Color Hue, Graphical Elements, Uncertain Value"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"cga0","slot_id":"v-cga-10198358","time_end":"","time_stamp":"","time_start":"","title":"Visualizing Uncertainty in Sets","uid":"v-cga-10198358","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"We report a study investigating the viability of using interactive visualizations to aid architectural design with building codes. While visualizations have been used to support general architectural design exploration, existing computational solutions treat building codes as separate from, rather than part of, the design process, creating challenges for architects. Through a series of participatory design studies with professional architects, we found that interactive visualizations have promising potential to aid design exploration and sensemaking in early stages of architectural design by providing feedback about potential allowances and consequences of design decisions. However, implementing a visualization system necessitates addressing the complexity and ambiguity inherent in building codes. 
To tackle these challenges, we propose various user-driven knowledge management mechanisms for integrating, negotiating, interpreting, and documenting building code rules.","accessible_pdf":false,"authors":[{"affiliations":"","email":"snowak@sfu.ca","is_corresponding":true,"name":"Stan Nowak"},{"affiliations":"","email":"bon.aseniero@autodesk.com","is_corresponding":false,"name":"Bon Adriel Aseniero"},{"affiliations":"","email":"lyn@sfu.ca","is_corresponding":false,"name":"Lyn Bartram"},{"affiliations":"","email":"tovi@dgp.toronto.edu","is_corresponding":false,"name":"Tovi Grossman"},{"affiliations":"","email":"George.fitzmaurice@autodesk.com","is_corresponding":false,"name":"George Fitzmaurice"},{"affiliations":"","email":"justin.matejka@autodesk.com","is_corresponding":false,"name":"Justin Matejka"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Stan Nowak"],"doi":"10.1109/MCG.2023.3307971","external_paper_link":"","fno":"10227838","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"cga0","slot_id":"v-cga-10227838","time_end":"","time_stamp":"","time_start":"","title":"Identifying Visualization Opportunities to Help Architects Manage the Complexity of Building Codes","uid":"v-cga-10227838","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Existing dynamic weighted graph visualization approaches rely on users\u2019 mental comparison to perceive temporal evolution of dynamic weighted graphs, hindering users from effectively analyzing changes across multiple timeslices. We propose DiffSeer, a novel approach for dynamic weighted graph visualization by explicitly visualizing the differences of graph structures (e.g., edge weight differences) between adjacent timeslices. Specifically, we present a novel nested matrix design that overviews the graph structure differences over a time period as well as shows graph structure details in the timeslices of user interest. By collectively considering the overall temporal evolution and structure details in each timeslice, an optimization-based node reordering strategy is developed to group nodes with similar evolution patterns and highlight interesting graph structure details in each timeslice. We conducted two case studies on real-world graph datasets and in-depth interviews with 12 target users to evaluate DiffSeer. 
The results demonstrate its effectiveness in visualizing dynamic weighted graphs.","accessible_pdf":false,"authors":[{"affiliations":"","email":"wenxiaolin@stu.scu.edu.cn","is_corresponding":false,"name":"Xiaolin Wen"},{"affiliations":"","email":"yongwang@smu.edu.sg","is_corresponding":true,"name":"Yong Wang"},{"affiliations":"","email":"wumeixuan@stu.scu.edu.cn","is_corresponding":false,"name":"Meixuan Wu"},{"affiliations":"","email":"wangfengjie@stu.scu.edu.cn","is_corresponding":false,"name":"Fengjie Wang"},{"affiliations":"","email":"xuanwu.yue@connect.ust.hk","is_corresponding":false,"name":"Xuanwu Yue"},{"affiliations":"","email":"shenqm@sustech.edu.cn","is_corresponding":false,"name":"Qiaomu Shen"},{"affiliations":"","email":"mayx@sustech.edu.cn","is_corresponding":false,"name":"Yuxin Ma"},{"affiliations":"","email":"zhumin@scu.edu.cn","is_corresponding":false,"name":"Min Zhu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yong Wang"],"doi":"10.1109/MCG.2023.3248289","external_paper_link":"","fno":"10078374","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Visibility Graph, Spatial Patterns, Weight Change, In-depth Interviews, Temporal Changes, Temporal Evolution, Negative Changes, Interesting Patterns, Edge Weights, Real-world Datasets, Graph Structure, Visual Approach, Dynamic Visualization, Dynamic Graph, Financial Networks, Graph Datasets, Similar Evolutionary Patterns, User Interviews, Similar Changes, Chinese New Year, Sector Indices, Original Graph, Red Rectangle, Nodes In Order, Stock Market Crash, Stacked Bar Charts, Different Types Of Matrices, Chinese New, Blue Rectangle"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"cga0","slot_id":"v-cga-10078374","time_end":"","time_stamp":"","time_start":"","title":"DiffSeer: Difference-Based Dynamic Weighted Graph Visualization","uid":"v-cga-10078374","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Some 15 years ago, Visualization Viewpoints published an influential article titled Rainbow Color Map (Still) Considered Harmful (Borland and Taylor, 2007). The paper argued that the \u201crainbow colormap\u2019s characteristics of confusing the viewer, obscuring the data and actively misleading interpretation make it a poor choice for visualization.\u201d Subsequent articles often repeat and extend these arguments, so much so that avoiding rainbow colormaps, along with their derivatives, has become dogma in the visualization community. Despite this loud and persistent recommendation, scientists continue to use rainbow colormaps. Have we failed to communicate our message, or do rainbow colormaps offer advantages that have not been fully appreciated? We argue that rainbow colormaps have properties that are underappreciated by existing design conventions. We explore key critiques of the rainbow in the context of recent research to understand where and how rainbows might be misunderstood. 
Choosing a colormap is a complex task, and rainbow colormaps can be useful for selected applications.","accessible_pdf":false,"authors":[{"affiliations":"","email":"cware@ccom.unh.edu","is_corresponding":false,"name":"Colin Ware"},{"affiliations":"","email":"mstone@acm.org","is_corresponding":true,"name":"Maureen Stone"},{"affiliations":"","email":"danielle.szafir@cs.unc.edu","is_corresponding":false,"name":"Danielle Albers Szafir"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Maureen Stone"],"doi":"10.1109/MCG.2023.3246111","external_paper_link":"","fno":"10128890","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Image Color Analysis, Semantics, Data Visualization, Estimation, Reliability Engineering"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"cga0","slot_id":"v-cga-10128890","time_end":"","time_stamp":"","time_start":"","title":"Rainbow Colormaps Are Not All Bad","uid":"v-cga-10128890","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"The membership function categorizes quantities along with a confidence degree. This article investigates a generic user interaction based on this function for categorizing various types of quantities without modification, which empowers users to articulate uncertainty categorization and enhance their visual data analysis significantly. We present the technique design and an online prototype, supplemented with insights from three case studies that highlight the technique\u2019s efficacy among different types of quantities. Furthermore, we conduct a formal user study to scrutinize the process and reasoning users employ while utilizing our technique. The findings indicate that our technique can help users create customized categories. 
Both our code and the interactive prototype are made available as open-source resources, intended for application across varied domains as a generic tool.","accessible_pdf":false,"authors":[{"affiliations":"","email":"liuliqun.cs@gmail.com","is_corresponding":true,"name":"Liqun Liu"},{"affiliations":"","email":"romain.vuillemot@ec-lyon.fr","is_corresponding":false,"name":"Romain Vuillemot"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Liqun Liu"],"doi":"10.1109/MCG.2023.3301449","external_paper_link":"","fno":"10207831","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Data Visualization, Uncertainty, Prototypes, Fuzzy Logic, Image Color Analysis, Fuzzy Sets, Open Source Software, General Function, Membership Function, User Study, Classification Process, Fuzzy Logic, Quantitative Values, Visualization Techniques, Amount Of Type, Fuzzy Theory, General Interaction, Temperature Dataset, Interaction Techniques, Carbon Dioxide, Computation Time, Rule Based, Web Page, Real World Scenarios, Fuzzy Set, Domain Experts, Supercritical CO 2, Parallel Coordinates, Fuzzy System, Fuzzy Clustering, Interactive Visualization, Amount Of Items, Large Scale Problems"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"cga0","slot_id":"v-cga-10207831","time_end":"","time_stamp":"","time_start":"","title":"A Generic Interactive Membership Function for Categorization of Quantities","uid":"v-cga-10207831","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Although visualizations are a useful tool for helping people to understand information, they can also have unintended effects on human cognition. This is especially true for uncertain information, which is difficult for people to understand. Prior work has found that different methods of visualizing uncertain information can produce different patterns of decision making from users. However, uncertainty can also be represented via text or numerical information, and few studies have systematically compared these types of representations to visualizations of uncertainty. We present two experiments that compared visual representations of risk (icon arrays) to numerical representations (natural frequencies) in a wildfire evacuation task. Like prior studies, we found that different types of visual cues led to different patterns of decision making. In addition, our comparison of visual and numerical representations of risk found that people were more likely to evacuate when they saw visualizations than when they saw numerical representations. These experiments reinforce the idea that design choices are not neutral: seemingly minor differences in how information is represented can have important impacts on human risk perception and decision making.","accessible_pdf":false,"authors":[{"affiliations":"","email":"lematze@sandia.gov","is_corresponding":true,"name":"Laura E. Matzen"},{"affiliations":"","email":"bchowel@sandia.gov","is_corresponding":false,"name":"Breannan C. Howell"},{"affiliations":"","email":"mctrumb@sandia.gov","is_corresponding":false,"name":"Michael C. S. Trumbo"},{"affiliations":"","email":"kmdivis@sandia.gov","is_corresponding":false,"name":"Kristin M. Divis"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Laura E. 
Matzen"],"doi":"10.1109/MCG.2023.3299875","external_paper_link":"","fno":"10201383","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Visualization, Uncertainty, Decision Making, Costs, Task Analysis, Laboratories, Information Analysis, Decision Making, Visual Representation, Numerical Representation, Decision Patterns, Deterministic, Risk Perception, Specific Information, Fundamental Frequency, Point Values, Representation Of Information, Risk Information, Visual Conditions, Numerous Conditions, Human Decision, Numerical Information, Impact Of Different Types, Uncertain Information, Type Of Visualization, Differences In Risk Perception, Representation Of Uncertainty, Increase In Participation, Participants In Experiment, Individual Difference Measures, Sandia National Laboratories, Risk Propensity, Bonus Payments, Average Response Time, Difference In Probability, Response Time"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"cga0","slot_id":"v-cga-10201383","time_end":"","time_stamp":"","time_start":"","title":"Numerical and Visual Representations of Uncertainty Lead to Different Patterns of Decision Making","uid":"v-cga-10201383","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Traditional approaches to data visualization have often focused on comparing different subsets of data, and this is reflected in the many techniques developed and evaluated over the years for visual comparison. Similarly, common workflows for exploratory visualization are built upon the idea of users interactively applying various filter and grouping mechanisms in search of new insights. This paradigm has proven effective at helping users identify correlations between variables that can inform thinking and decision-making. However, recent studies show that consumers of visualizations often draw causal conclusions even when not supported by the data. Motivated by these observations, this article highlights recent advances from a growing community of researchers exploring methods that aim to directly support visual causal inference. However, many of these approaches have their own limitations, which limit their use in many real-world scenarios. 
This article, therefore, also outlines a set of key open challenges and corresponding priorities for new research to advance the state of the art in visual causal inference.","accessible_pdf":false,"authors":[{"affiliations":"","email":"borland@renci.org","is_corresponding":false,"name":"David Borland"},{"affiliations":"","email":"zeyuwang@cs.unc.edu","is_corresponding":false,"name":"Arran Zeyu Wang"},{"affiliations":"","email":"gotz@unc.edu","is_corresponding":false,"name":"David Gotz"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Arran Zeyu Wang"],"doi":"10.1109/MCG.2023.3338788","external_paper_link":"","fno":"10414267","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Analytical Models, Correlation, Visual Analytics, Decision Making, Data Visualization, Reliability Theory, Cognition, Inference Algorithms, Causal Inference, Causality, Social Media, Exploratory Analysis, Data Visualization, Visual Representation, Visual Analysis, Visualization Tool, Open Challenges, Interactive Visualization, Assembly Line, Different Subsets Of Data, Visual Analytics Tool, Data Driven Decision Making, Data Quality, Statistical Models, Causal Effect, Visual System, Use Of Social Media, Bar Charts, Causal Model, Causal Graph, Chart Types, Directed Acyclic Graph, Visual Design, Portion Of The Dataset, Causal Structure, Prior Section, Causal Explanations, Line Graph"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"cga0","slot_id":"v-cga-10414267","time_end":"","time_stamp":"","time_start":"","title":"Using Counterfactuals to Improve Causal Inferences From Visualizations","uid":"v-cga-10414267","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Recent developments in artificial intelligence (AI) and machine learning (ML) have led to the creation of powerful generative AI methods and tools capable of producing text, code, images, and other media in response to user prompts. Significant interest in the technology has led to speculation about what fields, including visualization, can be augmented or replaced by such approaches. However, there remains a lack of understanding about which visualization activities may be particularly suitable for the application of generative AI. Drawing on examples from the field, we map current and emerging capabilities of generative AI across the different phases of the visualization lifecycle and describe salient opportunities and challenges.","accessible_pdf":false,"authors":[{"affiliations":"","email":"rahul.basole@accenture.com","is_corresponding":false,"name":"Rahul C. 
Basole"},{"affiliations":"","email":"timothy.major@accenture.com","is_corresponding":true,"name":"Timothy Major"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Timothy Major"],"doi":"10.1109/MCG.2024.3362168","external_paper_link":"","fno":"10478355","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Generative AI, Art, Artificial Intelligence, Machine Learning, Visualization, Media, Augmented Reality, Machine Learning, Visual Representation, Professional Knowledge, Creative Process, Domain Experts, Generalization Capability, Development Of Artificial Intelligence, Artificial Intelligence Capabilities, Iterative Process, Natural Language, Commercial Software, Hallucinations, Team Sports, Design Requirements, Intelligence Agencies, Recommender Systems, User Requirements, Iterative Design, Use Of Artificial Intelligence, Visual Design, Phase Assemblage, Data Literacy"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"cga0","slot_id":"v-cga-10478355","time_end":"","time_stamp":"","time_start":"","title":"Generative AI for Visualization: Opportunities and Challenges","uid":"v-cga-10478355","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""}],"time_start":"","title":"CG&A","track":"","zoom_broadcast_link":"","zoom_private_link":"","zoom_private_meeting":"","zoom_private_password":""}]},"v-full":{"event":"VIS Full Papers","event_description":"","event_prefix":"v-full","event_type":"full","event_url":"","long_name":"VIS Full Papers","organizers":[],"sessions":[{"chair":[],"discord_category":"","discord_channel":"","discord_channel_id":"","discord_link":"","event_prefix":"v-full","ff_link":"","session_id":"full0","session_image":"full0.png","time_end":"","time_slots":[{"abstract":"We present a visual analytics approach for multi-level visual exploration of users\u2019 interaction strategies in an interactive digital environment. The use of interactive touchscreen exhibits in informal learning environments, such as museums and science centers, often incorporate frameworks that classify learning processes, such as Bloom\u2019s taxonomy, to achieve better user engagement and knowledge transfer. To analyze user behavior within these digital environments, interaction logs are recorded to capture diverse exploration strategies. However, analysis of such logs is challenging, especially in terms of coupling interactions and cognitive learning processes, and existing work within learning and educational contexts remains limited. To address these gaps, we develop a visual analytics approach for analyzing interaction logs that supports exploration at the individual user level and multi-user comparison. The approach utilizes algorithmic methods to identify similarities in users' interactions and reveal their exploration strategies. We motivate and illustrate our approach through an application scenario, using event sequences derived from interaction log data in an experimental study conducted with science center visitors from diverse backgrounds and demographics. The study involves 14 users completing tasks of increasing complexity, designed to stimulate different levels of cognitive learning processes. 
We implement our approach in an interactive visual analytics prototype system, named VISID, and together with domain experts, discover a set of task-solving exploration strategies, such as \"cascading\" and \"nested-loop\", which reflect different levels of learning processes from Bloom's taxonomy. Finally, we discuss the generalizability and scalability of the presented system and the need for further research with data acquired in the wild.","accessible_pdf":false,"authors":[{"affiliations":["Media and Information Technology, Norrk\u00f6ping, Sweden"],"email":"peilin.yu@liu.se","is_corresponding":true,"name":"Peilin Yu"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"aida.vitoria@liu.se","is_corresponding":false,"name":"Aida Nordman"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"marta.koc-januchta@liu.se","is_corresponding":false,"name":"Marta M. Koc-Januchta"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"konrad.schonborn@liu.se","is_corresponding":false,"name":"Konrad J Sch\u00f6nborn"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"lonni.besancon@gmail.com","is_corresponding":false,"name":"Lonni Besan\u00e7on"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"katerina.vrotsou@liu.se","is_corresponding":false,"name":"Katerina Vrotsou"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Peilin Yu"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1026","time_end":"","time_stamp":"","time_start":"","title":"Revealing Interaction Dynamics: Multi-Level Visual Exploration of User Strategies with an Interactive Digital Environment","uid":"v-full-1026","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"In soccer, player scouting aims to find players suitable for a team to increase the winning chance in future matches. To scout suitable players, coaches and analysts need to consider various complicated factors, such as the players' performance in the tactics of a new team, which is hard to learn directly from their historical performance. Match simulation methods have been introduced to scout players by estimating their expected contributions to a new team. However, they usually focus on the simulation of match results and hardly support interactive analysis to navigate potential target players and compare them in fine-grained simulated behaviors. In this work, we propose a visual analytics method to assist soccer player scouting based on match simulation. We construct a two-level match simulation framework for estimating both match results and player behaviors when a player comes to a new team. Based on the framework, we develop a visual analytics system, Team-Scouter, to facilitate the simulative-based soccer player scouting process through player navigation, comparison, and explanation. With our system, coaches and analysts can find potential players suitable for the team and compare them on historical and expected performances. To explain the players' expected performances, the system provides a visual comparison between the simulated behaviors of the player and the actual ones. 
The usefulness and effectiveness of the system are demonstrated by two case studies on a real-world dataset and an expert interview.","accessible_pdf":false,"authors":[{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"caoanqi28@163.com","is_corresponding":true,"name":"Anqi Cao"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"xxie@zju.edu.cn","is_corresponding":false,"name":"Xiao Xie"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"2366385033@qq.com","is_corresponding":false,"name":"Runjin Zhang"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"1282533692@qq.com","is_corresponding":false,"name":"Yuxin Tian"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"fanmu_032@zju.edu.cn","is_corresponding":false,"name":"Mu Fan"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"zhang_hui@zju.edu.cn","is_corresponding":false,"name":"Hui Zhang"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"ycwu@zju.edu.cn","is_corresponding":false,"name":"Yingcai Wu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Anqi Cao"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1031","time_end":"","time_stamp":"","time_start":"","title":"Team-Scouter: Simulative Visual Analytics of Soccer Player Scouting","uid":"v-full-1031","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Dynamic topic modeling is useful for discovering the development and change in latent topics over time. However, present methodology relies on algorithms that separate document and word representations. This prevents the creation of a meaningful embedding space where changes in word usage and documents can be directly analyzed in a temporal context. This paper proposes an expansion of the compass-aligned temporal Word2Vec methodology into dynamic topic modeling. Such a method allows for the direct comparison of word and document embeddings across time in dynamic topics. This enables the creation of visualizations that incorporate diachronic word embeddings within the context of documents into topic visualizations. In experiments against the current state-of-the-art, our proposed method demonstrates overall competitive performance in topic relevancy and diversity across temporal datasets of varying size. 
Simultaneously, it provides insightful visualizations focused on temporal word embeddings while maintaining the insights provided by global topic evolution, advancing our understanding of how topics evolve over time.","accessible_pdf":false,"authors":[{"affiliations":["Virginia Tech, Blacksburg, United States"],"email":"d4n1elp@vt.edu","is_corresponding":true,"name":"Daniel Palamarchuk"},{"affiliations":["Virginia Polytechnic Institute of Technology , Blacksburg, United States"],"email":"lemaraw@vt.edu","is_corresponding":false,"name":"Lemara Williams"},{"affiliations":["Virginia Tech, Blacksburg, United States"],"email":"bmayer@cs.vt.edu","is_corresponding":false,"name":"Brian Mayer"},{"affiliations":["Savannah River National Laboratory, Aiken, United States"],"email":"thomas.danielson@srnl.doe.gov","is_corresponding":false,"name":"Thomas Danielson"},{"affiliations":["Tulane University, New Orleans, United States"],"email":"rfaust1@tulane.edu","is_corresponding":false,"name":"Rebecca Faust"},{"affiliations":["Savannah River National Laboratory, Aiken, United States"],"email":"larry.deschaine@srnl.doe.gov","is_corresponding":false,"name":"Larry M Deschaine PhD"},{"affiliations":["Virginia Tech, Blacksburg, United States"],"email":"north@vt.edu","is_corresponding":false,"name":"Chris North"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Daniel Palamarchuk"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1032","time_end":"","time_stamp":"","time_start":"","title":"Visualizing Temporal Topic Embeddings with a Compass","uid":"v-full-1032","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Propagation analysis refers to studying how information spreads on social media, a pivotal endeavor for understanding social sentiment and public opinions. Numerous studies contribute to visualizing information spread, but few have considered the implicit and complex diffusion patterns among multiple platforms. To bridge the gap, we collaborated with professionals to discover crucial factors that dissect the mechanism of cross-platform information spread. Based on that, we propose an information diffusion model that estimates the likelihood of a topic/post spreading among different social media platforms. Moreover, we propose a novel visual metaphor that encapsulates cross-platform patterns in a manner analogous to the spread of seeds across gardens. Specifically, we visualize social platforms, posts, implicit cross-platform routes, and salient instances as elements of a virtual ecosystem \u2014 gardens, flowers, winds, and seeds, respectively. We further develop a visual analytic system, namely BloomWind, that enables users to quickly identify the cross-platform diffusion patterns and investigate the relevant social media posts. 
Ultimately, we demonstrate the usage of BloomWind through two case studies and validate its effectiveness using expert interviews.","accessible_pdf":false,"authors":[{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"940662579@qq.com","is_corresponding":true,"name":"Jianing Yin"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"hzjia@zju.edu.cn","is_corresponding":false,"name":"Hanze Jia"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"zhoubuwei@zju.edu.cn","is_corresponding":false,"name":"Buwei Zhou"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"tangtan@zju.edu.cn","is_corresponding":false,"name":"Tan Tang"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"yingluu@zju.edu.cn","is_corresponding":false,"name":"Lu Ying"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"sn_ye@zju.edu.cn","is_corresponding":false,"name":"Shuainan Ye"},{"affiliations":["Michigan State University, East Lansing, United States"],"email":"pengtaiq@msu.edu","is_corresponding":false,"name":"Tai-Quan Peng"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"ycwu@zju.edu.cn","is_corresponding":false,"name":"Yingcai Wu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Jianing Yin"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1039","time_end":"","time_stamp":"","time_start":"","title":"Blowing Seeds Across Gardens: Visualizing Implicit Propagation of Cross-Platform Social Media Posts","uid":"v-full-1039","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"When treating Head and Neck cancer patients, oncologists have to navigate a complicated series of treatment decisions for each patient. The relationship between each treatment decision and the potential tradeoff of tumor control and toxicity risk is poorly understood, leaving oncologists to largely rely on institutional knowledge and general guidelines that do not take into account specific patient circumstances. Evaluating these risks relies on a complicated understanding of several different factors such as patient health, spatial tumor spread and treatment side effect risk that cannot be captured through simple heuristics. To support clinicians in better understanding tradeoffs when deciding on treatment courses, we developed DITTO, a digital-twin and visual computing system that allows clinicians to analyze nuanced patient risk for each patient and decide on an optimal treatment plan. DITTO relies on a sequential Deep Reinforcement Learning (DRL) system to deliver personalized risk of both long-term and short-term disease outcome and toxicity risk for HNC patients. Based on a participatory collaborative design alongside oncologists, we also implement several explainability methods to support clinical trust and encourage healthy skepticism when using our models. We evaluate the efficacy of our model through quantitative evaluation of model performance and case studies with qualitative feedback. 
Finally, we discuss design lessons for developing clinical visual XAI applications for clinical end users.","accessible_pdf":false,"authors":[{"affiliations":["University of Illinois at Chicago, Chicago, United States"],"email":"awentze2@uic.edu","is_corresponding":true,"name":"Andrew Wentzel"},{"affiliations":["University of Houston, Houston, United States"],"email":"skattia@mdanderson.org","is_corresponding":false,"name":"Serageldin Attia"},{"affiliations":["University of Illinois Chicago, Chicago, United States"],"email":"zhangz@uic.edu","is_corresponding":false,"name":"Xinhua Zhang"},{"affiliations":["University of Iowa, Iowa City, United States"],"email":"guadalupe-canahuate@uiowa.edu","is_corresponding":false,"name":"Guadalupe Canahuate"},{"affiliations":["University of Texas, Houston, United States"],"email":"cdfuller@mdanderson.org","is_corresponding":false,"name":"Clifton David Fuller"},{"affiliations":["University of Illinois at Chicago, Chicago, United States"],"email":"g.elisabeta.marai@gmail.com","is_corresponding":false,"name":"G. Elisabeta Marai"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Andrew Wentzel"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1059","time_end":"","time_stamp":"","time_start":"","title":"DITTO: A Visual Digital Twin for Interventions and Temporal Treatment Outcomes in Head and Neck Cancer","uid":"v-full-1059","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"There is increased interest in understanding the interplay between text and visuals in the field of data visualization. However, this attention has predominantly been on the use of text in standalone visualizations (such as text annotation overlays) or augmenting text stories supported by a series of independent views. In this paper, we shift from the traditional focus on single-chart annotations to characterize the nuanced but crucial communication role of text in the complex environment of interactive dashboards. Through a survey and analysis of 190 dashboards in the wild, plus 13 expert interview sessions with experienced dashboard authors, we highlight the distinctive nature of text as an integral component of the dashboard experience, while delving into the categories, semantic levels, and functional roles of text, and exploring how these text elements are coalesced by dashboard authors to guide and inform dashboard users. Our contributions are threefold. First, we distill qualitative and quantitative findings from our studies to characterize current practices of text use in dashboards, including a categorization of text-based components and design patterns. Second, we leverage current practices and existing literature to propose, discuss, and validate recommended practices for text in dashboards, embodied as a set of 12 heuristics that underscore the semantic and functional role of text in offering navigational cues, contextualizing data insights, supporting reading order, among other concerns. Third, we reflect on our findings plus existing literature to identify gaps and propose opportunities for data visualization researchers to push the boundaries on text usage for dashboards, from authoring support and interactivity to text generation and content personalization. 
Our research underscores the significance of elevating text as a first-class citizen in data visualization, and the need to support the inclusion of textual components and their interactive affordances in dashboard design.","accessible_pdf":false,"authors":[{"affiliations":["Tableau Research, Seattle, United States"],"email":"nicole.sultanum@gmail.com","is_corresponding":true,"name":"Nicole Sultanum"},{"affiliations":["Tableau Research, Palo Alto, United States"],"email":"vsetlur@tableau.com","is_corresponding":false,"name":"Vidya Setlur"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Nicole Sultanum"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1060","time_end":"","time_stamp":"","time_start":"","title":"From Instruction to Insight: Exploring the Semantic and Functional Roles of Text in Interactive Dashboards","uid":"v-full-1060","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"While previous work has found success in deploying visualizations as museum exhibits, it has not investigated whether museum context impacts visitor behaviour with these exhibits. We present an interactive Deep-time Literacy Visualization Exhibit (DeLVE) to help museum visitors understand deep time (lengths of extremely long geological processes) by improving proportional reasoning skills through comparison of different time periods. DeLVE uses a new visualization idiom, Connected Multi-Tier Ranges, to visualize curated datasets of past events across multiple scales of time, relating extreme scales with concrete scales that have more familiar magnitudes and units. Museum staff at three separate museums approved the deployment of DeLVE as a digital kiosk, and devoted time to curating a unique dataset in each of them. We collect data from two sources, an observational study and system trace logs. We discuss the importance of context: similar museum exhibits in different contexts were received very differently by visitors. We additionally discuss differences in our process from Sedlmair et al.'s design study methodology, which is focused on design studies triggered by connection with collaborators rather than the discovery of a concept to communicate. Supplemental materials are available at: https://osf.io/z53dq/?view_only=4df33aad207144aca149982412125541","accessible_pdf":false,"authors":[{"affiliations":["The University of British Columbia, Vancouver, Canada"],"email":"marasolen@gmail.com","is_corresponding":true,"name":"Mara Solen"},{"affiliations":["University of British Columbia, Vancouver, Canada"],"email":"sultananigar70@gmail.com","is_corresponding":false,"name":"Nigar Sultana"},{"affiliations":["University of British Columbia, Vancouver, Canada"],"email":"laura.lukes@ubc.ca","is_corresponding":false,"name":"Laura A. 
Lukes"},{"affiliations":["University of British Columbia, Vancouver, Canada"],"email":"tmm@cs.ubc.ca","is_corresponding":false,"name":"Tamara Munzner"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Mara Solen"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1063","time_end":"","time_stamp":"","time_start":"","title":"DeLVE into Earth\u2019s Past: A Visualization-Based Exhibit Deployed Across Multiple Museum Contexts","uid":"v-full-1063","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Large Language Models (LLMs), such as ChatGPT and Llama, have revolutionized various domains through their impressive natural language processing capabilities. However, their deployment raises significant ethical and security concerns, including their potential misuse for generating fake news or aiding illegal activities. Thus, ensuring the development of secure and trustworthy LLMs is crucial. Traditional red teaming approaches for identifying vulnerabilities in AI models are limited by their reliance on manual prompt construction and expertise. This paper introduces a novel visual analytics system, AdversaFlow, designed to enhance the security of LLMs against adversarial attacks through human-AI collaboration. Our system, which involves adversarial training between a target model and a red model, is equipped with a unique multi-level adversarial flow visualization and a fluctuation path visualization technique. These features provide a detailed insight into the adversarial dynamics and the robustness of LLMs, thereby enabling AI security experts to identify and mitigate vulnerabilities effectively. We deliver quantitative evaluations for the models and present case studies that validate the utility of our system and share insights for future AI security solutions. 
Our contributions include a human-AI collaboration framework for LLM red teaming, a comprehensive visual analytics system to support adversarial pattern presentation and fluctuation analysis, and valuable lessons learned in visual analytics for AI security.","accessible_pdf":false,"authors":[{"affiliations":["Zhejiang University, Ningbo, China"],"email":"dengdazhen@outlook.com","is_corresponding":true,"name":"Dazhen Deng"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"zhangchuhan024@163.com","is_corresponding":false,"name":"Chuhan Zhang"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"huawzheng@gmail.com","is_corresponding":false,"name":"Huawei Zheng"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"yw.pu@zju.edu.cn","is_corresponding":false,"name":"Yuwen Pu"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"sji@zju.edu.cn","is_corresponding":false,"name":"Shouling Ji"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"ycwu@zju.edu.cn","is_corresponding":false,"name":"Yingcai Wu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Dazhen Deng"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1067","time_end":"","time_stamp":"","time_start":"","title":"AdversaFlow: Visual Red Teaming for Large Language Models with Multi-Level Adversarial Flow","uid":"v-full-1067","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"A growing body of work draws on feminist thinking to challenge assumptions about how people engage with and use visualizations. This work draws on feminist values, driving design and research guidelines that account for the influences of power and neglect. This prior work is largely prescriptive, however, forgoing articulation of how feminist theories of knowledge \u2014 or feminist epistemology \u2014 can alter research design and outcomes. At the core of our work is an engagement with feminist epistemology, drawing attention to how a new framework for how we know what we know enabled us to overcome intellectual tensions in our research. Specifically, we focus on the theoretical concept of entanglement, central to recent feminist scholarship, and contribute: a history of entanglement in the broader scope of feminist theory; an articulation of the main points of entanglement theory for a visualization context; and a case study of research outcomes as evidence of the potential of feminist epistemology to impact visualization research. 
This work answers a call in the community to embrace a broader set of theoretical and epistemic foundations and provides a starting point for bringing different theories into visualization research.","accessible_pdf":false,"authors":[{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"derya.akbaba@liu.se","is_corresponding":true,"name":"Derya Akbaba"},{"affiliations":["Emory University, Atlanta, United States"],"email":"lauren.klein@emory.edu","is_corresponding":false,"name":"Lauren Klein"},{"affiliations":["Link\u00f6ping University, N\u00f6rrkoping, Sweden"],"email":"miriah.meyer@liu.se","is_corresponding":false,"name":"Miriah Meyer"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Derya Akbaba"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1077","time_end":"","time_stamp":"","time_start":"","title":"Entanglements for Visualization: Changing Research Outcomes through Feminist Theory","uid":"v-full-1077","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Large Language Models (LLMs) have shown great potential in intelligent visualization systems, especially for domain-specific applications. Integrating LLMs into visualization systems presents challenges, and we categorize these challenges into three alignments: domain problems with LLMs, visualization with LLMs, and interaction with LLMs. To achieve these alignments, we propose a framework and outline a workflow to guide the application of fine-tuned LLMs to enhance visual interactions for domain-specific tasks. These alignment challenges are critical in education as they call for an intelligent visualization system to support beginners' self-regulated learning. Therefore, we apply the framework to education and introduce Tailor-Mind, an interactive visualization system designed to facilitate self-regulated learning for artificial intelligence beginners. Drawing on insights from a preliminary study, we identify self-regulated learning tasks and fine-tuning objectives to guide visualization design and tuning data construction. Our focus on aligning visualization with fine-tuned LLM makes Tailor-Mind more like a personalized tutor. Tailor-Mind also supports interactive recommendations to help beginners better achieve their learning goals. 
Model performance evaluations and user studies confirm that Tailor-Mind improves the self-regulated learning experience, effectively validating the proposed framework.","accessible_pdf":false,"authors":[{"affiliations":["Fudan University, Shanghai, China"],"email":"lgao.lynne@gmail.com","is_corresponding":true,"name":"Lin Gao"},{"affiliations":["Fudan University, Shanghai, China"],"email":"kingluther6666@gmail.com","is_corresponding":false,"name":"Jing Lu"},{"affiliations":["Fudan University, Shanghai, China"],"email":"gemini25szk@gmail.com","is_corresponding":false,"name":"Zekai Shao"},{"affiliations":["Fudan University, Shanghai, China"],"email":"ziyuelin917@gmail.com","is_corresponding":false,"name":"Ziyue Lin"},{"affiliations":["Fudan University, Shanghai, China"],"email":"sbyue23@m.fudan.edu.cn","is_corresponding":false,"name":"Shengbin Yue"},{"affiliations":["Fudan University, Shanghai, China"],"email":"chiokit0819@gmail.com","is_corresponding":false,"name":"Chiokit Ieong"},{"affiliations":["Fudan University, Shanghai, China"],"email":"21307130094@m.fudan.edu.cn","is_corresponding":false,"name":"Yi Sun"},{"affiliations":["University of Vienna, Vienna, Austria"],"email":"rory.james.zauner@univie.ac.at","is_corresponding":false,"name":"Rory Zauner"},{"affiliations":["Fudan University, Shanghai, China"],"email":"zywei@fudan.edu.cn","is_corresponding":false,"name":"Zhongyu Wei"},{"affiliations":["Fudan University, Shanghai, China"],"email":"simingchen3@gmail.com","is_corresponding":false,"name":"Siming Chen"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Lin Gao"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1096","time_end":"","time_stamp":"","time_start":"","title":"Fine-Tuned Large Language Model for Visualization System: A Study on Self-Regulated Learning in Education","uid":"v-full-1096","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Tactics play an important role in team sports by guiding how players interact on the field. Both sports fans and experts have a demand for analyzing sports tactics. Existing approaches allow users to visually perceive the multivariate tactical effects. However, these approaches usually consider each tactic as a whole, making it difficult for users to connect the complex interactions inside each tactic to the final tactical effect. In this work, we collaborate with basketball experts and propose a progressive approach to help users gain a deeper understanding of how each tactic works and customize tactics on demand. Users can progressively sketch on a tactic board, and a coach agent will simulate the possible actions in each step and present the simulation to users with facet visualizations. We develop an extensible framework that integrates large language models (LLMs) and visualizations to help users communicate with the coach agent with multimodal inputs. Based on the framework, we design and develop Smartboard, an agent-based interactive visualization system for fine-grained tactical analysis. Smartboard provides users with a structured process of setup, simulation, and evolution, allowing for iterative exploration of tactics based on specific personalized scenarios. 
We conduct case studies based on real-world basketball datasets to demonstrate the usefulness of our system.","accessible_pdf":false,"authors":[{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"ziao_liu@outlook.com","is_corresponding":true,"name":"Ziao Liu"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"xxie@zju.edu.cn","is_corresponding":false,"name":"Xiao Xie"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"3170101799@zju.edu.cn","is_corresponding":false,"name":"Moqi He"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"zhao_ws@zju.edu.cn","is_corresponding":false,"name":"Wenshuo Zhao"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"wuyihong0606@gmail.com","is_corresponding":false,"name":"Yihong Wu"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"lycheecheng@zju.edu.cn","is_corresponding":false,"name":"Liqi Cheng"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"zhang_hui@zju.edu.cn","is_corresponding":false,"name":"Hui Zhang"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"ycwu@zju.edu.cn","is_corresponding":false,"name":"Yingcai Wu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Ziao Liu"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1099","time_end":"","time_stamp":"","time_start":"","title":"Smartboard: Visual Exploration of Team Tactics with LLM Agent","uid":"v-full-1099","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"\u201cCorrelation does not imply causation\u201d is a famous mantra in statistical and visual analysis. However, consumers of visualizations often draw causal conclusions when only correlations between variables are shown. In this paper, we investigate factors that contribute to causal relationships users perceive in visualizations. We collected a corpus of concept pairs from variables in widely used datasets and created visualizations that depict varying correlative associations using three typical statistical chart types. We conducted two MTurk studies on (1) preconceived notions on causal relations without charts, and (2) perceived causal relations with charts, for each concept pair. Our results indicate that people make assumptions about causal relationships between pairs of concepts even without seeing any visualized data. Moreover, our results suggest that these assumptions constitute causal priors that, in combination with chart type and visualized association, impact how data visualizations are interpreted. The results also suggest that causal priors may lead to over- or under-estimation in perceived causal relations in different circumstances, and that those priors can also impact users\u2019 confidence in their causal assessments. Using data from the studies, we develop a model to capture the interaction between causal priors and visualized associations as they combine to impact a user\u2019s perceived causal relations. In addition to reporting the study results and analyses, we provide an open dataset of causal priors for 56 specific concept pairs that can serve as a potential benchmark for future studies. 
We also suggest heuristic-based guidelines to help designers improve visualization design choices to better support visual causal inference.","accessible_pdf":false,"authors":[{"affiliations":["University of North Carolina-Chapel Hill, Chapel Hill, United States"],"email":"zeyuwang@cs.unc.edu","is_corresponding":true,"name":"Arran Zeyu Wang"},{"affiliations":["UNC-Chapel Hill, Chapel Hill, United States"],"email":"borland@renci.org","is_corresponding":false,"name":"David Borland"},{"affiliations":["Davidson College, Davidson, United States"],"email":"tapeck@davidson.edu","is_corresponding":false,"name":"Tabitha C. Peck"},{"affiliations":["University of North Carolina, Chapel Hill, United States"],"email":"vaapad@live.unc.edu","is_corresponding":false,"name":"Wenyuan Wang"},{"affiliations":["University of North Carolina, Chapel Hill, United States"],"email":"gotz@unc.edu","is_corresponding":false,"name":"David Gotz"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Arran Zeyu Wang"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1100","time_end":"","time_stamp":"","time_start":"","title":"Causal Priors and Their Influence on Judgements of Causality in Visualized Data","uid":"v-full-1100","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Acute stroke demands prompt diagnosis and treatment to achieve optimal patient outcomes. However, the intricate and irregular nature of clinical data associated with acute stroke, particularly blood pressure (BP) measurements, presents substantial obstacles to effective visual analytics and decision-making. Through a year-long collaboration with experienced neurologists, we developed PhenoFlow, a visual analytics system that leverages the collaboration between human and Large Language Models (LLMs) to analyze the extensive and complex data of acute ischemic stroke patients. PhenoFlow pioneers an innovative workflow, where the LLM serves as a data wrangler while neurologists explore and supervise the output using visualizations and natural language interactions. This approach enables neurologists to focus more on decision-making with reduced cognitive load. To protect sensitive patient information, PhenoFlow only utilizes metadata to make inferences and synthesize executable code, without accessing raw patient data. This ensures that the results are both reproducible and interpretable while maintaining patient privacy. The system incorporates a slice-and-wrap design that employs temporal folding to create an overlaid circular visualization. Combined with a linear bar graph, this design aids in exploring meaningful patterns within irregularly measured BP data. Through case studies, PhenoFlow has demonstrated its capability to support iterative analysis of extensive clinical datasets, reducing cognitive load and enabling neurologists to make well-informed decisions. 
Grounded in long-term collaboration with domain experts, our research demonstrates the potential of utilizing LLMs to tackle current challenges in data-driven clinical decision-making for acute ischemic stroke patients.","accessible_pdf":false,"authors":[{"affiliations":["Seoul National University, Seoul, Korea, Republic of"],"email":"jykim@hcil.snu.ac.kr","is_corresponding":true,"name":"Jaeyoung Kim"},{"affiliations":["Seoul National University, Seoul, Korea, Republic of"],"email":"sihyeon@hcil.snu.ac.kr","is_corresponding":false,"name":"Sihyeon Lee"},{"affiliations":["Seoul National University, Seoul, Korea, Republic of"],"email":"hj@hcil.snu.ac.kr","is_corresponding":false,"name":"Hyeon Jeon"},{"affiliations":["Korea University Guro Hospital, Seoul, Korea, Republic of"],"email":"gooday19@gmail.com","is_corresponding":false,"name":"Keon-Joo Lee"},{"affiliations":["Hankuk University of Foreign Studies, Yongin-si, Korea, Republic of"],"email":"bkim@hufs.ac.kr","is_corresponding":false,"name":"Bohyoung Kim"},{"affiliations":["Seoul National University Bundang Hospital, Seongnam, Korea, Republic of"],"email":"braindoc@snu.ac.kr","is_corresponding":false,"name":"HEE JOON"},{"affiliations":["Seoul National University, Seoul, Korea, Republic of"],"email":"jseo@snu.ac.kr","is_corresponding":false,"name":"Jinwook Seo"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Jaeyoung Kim"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1121","time_end":"","time_stamp":"","time_start":"","title":"PhenoFlow: A Human-LLM Driven Visual Analytics System for Exploring Large and Complex Stroke Datasets","uid":"v-full-1121","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Citations allow quickly identifying related research. If multiple publications are selected as seeds, specific suggestions for related literature can be made based on the number of incoming and outgoing citation links to this selection. Interactively adding recommended publications to the selection refines the next suggestion and incrementally builds a relevant collection of publications. Following this approach, the paper presents a search and foraging approach, PUREsuggest, which combines citation-based suggestions with augmented visualizations of the citation network. The focus and novelty of the approach are, first, the transparency of how the rankings are explained visually and, second, that the process can be steered through user-defined keywords, which reflect topics of interest. The system can be used to build new literature collections, to update and assess existing ones, as well as to use the collected literature for identifying relevant experts in the field. 
We evaluated the recommendation approach through simulated sessions and performed a user study investigating search strategies and usage patterns supported by the interface.","accessible_pdf":false,"authors":[{"affiliations":["University of Bamberg, Bamberg, Germany"],"email":"fabian.beck@uni-bamberg.de","is_corresponding":true,"name":"Fabian Beck"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Fabian Beck"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1128","time_end":"","time_stamp":"","time_start":"","title":"PUREsuggest: Citation-based Literature Search and Visual Exploration with Keyword-controlled Rankings","uid":"v-full-1128","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Inspired by recent advances in digital fabrication, artists and scientists have demonstrated that physical data encodings (i.e., data physicalizations) can increase engagement with data, foster collaboration, and in some cases, improve data legibility and analysis relative to digital alternatives. However, prior empirical studies have only investigated abstract data encoded in physical form (e.g., laser cut bar charts) and not continuously sampled spatial data fields relevant to climate and medical science (e.g., heights, temperatures, densities, and velocities sampled on a spatial grid). This paper presents the design and results of the first study to characterize human performance in 3D spatial data analysis tasks across analogous physical and digital visualizations. Participants analyzed continuous spatial elevation data with three visualization modalities: (1) 2D digital visualization; (2) perspective-tracked, stereoscopic \"fishtank\" virtual reality; and (3) 3D printed data physicalization. Their tasks included tracing paths downhill, looking up spatial locations and comparing their relative heights, and identifying and reporting the minimum and maximum heights within certain spatial regions. As hypothesized, in most cases, participants performed the tasks just as well or better in the physical modality (based on time and error metrics). Additional results include an analysis of open-ended feedback from participants and discussion of implications for further research on the value of data physicalization. All data and supplemental materials are available at https://osf.io/7xdq4/?view_only=7416f8cfca85473889456fb69527abbc","accessible_pdf":false,"authors":[{"affiliations":["University of Minnesota, Minneapolis, United States"],"email":"bridger.g.herman@gmail.com","is_corresponding":true,"name":"Bridger Herman"},{"affiliations":["Beth Israel Deaconess Medical Center, Boston, United States"],"email":"cdjackso@bidmc.harvard.edu","is_corresponding":false,"name":"Cullen D. Jackson"},{"affiliations":["University of Minnesota, Minneapolis, United States"],"email":"dfk@umn.edu","is_corresponding":false,"name":"Daniel F. 
Keefe"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Bridger Herman"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1137","time_end":"","time_stamp":"","time_start":"","title":"Touching the Ground: Evaluating the Effectiveness of Data Physicalizations for Spatial Data Analysis Tasks","uid":"v-full-1137","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Written language is a useful mode for non-visual creative activities like writing essays and planning searches. This paper investigates the integration of written language into the visualization design process. We call this idea a `written rudder,' , since it acts as a guiding force or strategy for the design. Via an interview study of 24 working visualization designers, we first established that only a minority of participants systematically use written rudders to aid in design. A second study with 15 visualization designers examined four different variants of rudders: asking questions, stating conclusions, composing a narrative, and writing titles. Overall, participants had a positive reaction; designers recognized the benefits of explicitly writing down components of the design and indicated that they would use this approach in future design work. More specifically, two approaches \u2013- writing questions and writing conclusions/takeaways \u2013- were seen as beneficial across the design process, while writing narratives showed promise mainly for the creation stage. Although concerns around potential bias during data exploration were raised, participants also discussed strategies to mitigate such concerns. This paper contributes to a deeper understanding of the interplay between language and visualization, and proposes a straightforward, lightweight addition to the visualization design process.","accessible_pdf":false,"authors":[{"affiliations":["UC Berkeley, Berkeley, United States"],"email":"chase_stokes@berkeley.edu","is_corresponding":true,"name":"Chase Stokes"},{"affiliations":["Self, Berkeley, United States"],"email":"clarahu@berkeley.edu","is_corresponding":false,"name":"Clara Hu"},{"affiliations":["UC Berkeley, Berkeley, United States"],"email":"hearst@berkeley.edu","is_corresponding":false,"name":"Marti Hearst"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Chase Stokes"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1140","time_end":"","time_stamp":"","time_start":"","title":"It's a Good Idea to Put It Into Words: Writing 'Rudders' in the Initial Stages of Visualization Design","uid":"v-full-1140","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"To deploy machine learning (ML) models on-device, practitioners use compression algorithms to shrink and speed up models while maintaining their high-quality output. 
A critical aspect of compression in practice is model comparison, including tracking many compression experiments, identifying subtle changes in model behavior, and negotiating complex accuracy-efficiency trade-offs. However, existing compression tools poorly support comparison, leading to tedious and, sometimes, incomplete analyses spread across disjoint tools. To support real-world comparative workflows, we develop an interactive visual system called Compress & Compare. Within a single interface, Compress & Compare surfaces promising compression strategies by visualizing provenance relationships between compressed models and reveals compression-induced behavior changes by comparing models' predictions, weights, and activations. We demonstrate how Compress & Compare supports common compression analysis tasks through two case studies\u2014debugging failed compression on generative language models and identifying compression-induced biases in image classification. We further evaluate Compress & Compare in a user study with eight compression experts, illustrating its potential to provide structure to compression workflows, help practitioners build intuition about compression, and encourage thorough analysis of compression\u2019s effect on model behavior. Through these evaluations, we identify compression-specific challenges that future visual analytics tools should consider and Compress & Compare visualizations that may generalize to broader model comparison tasks.","accessible_pdf":false,"authors":[{"affiliations":["Massachusetts Institute of Technology, Cambridge, United States"],"email":"aboggust@mit.edu","is_corresponding":true,"name":"Angie Boggust"},{"affiliations":["Carnegie Mellon University, Pittsburgh, United States"],"email":"vsivaram@andrew.cmu.edu","is_corresponding":false,"name":"Venkatesh Sivaraman"},{"affiliations":["Apple, Cambridge, United States"],"email":"yassogba@gmail.com","is_corresponding":false,"name":"Yannick Assogba"},{"affiliations":["Apple, Seattle, United States"],"email":"donghao@apple.com","is_corresponding":false,"name":"Donghao Ren"},{"affiliations":["Apple, Pittsburgh, United States"],"email":"domoritz@cmu.edu","is_corresponding":false,"name":"Dominik Moritz"},{"affiliations":["Apple, Seattle, United States"],"email":"fred.hohman@gmail.com","is_corresponding":false,"name":"Fred Hohman"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Angie Boggust"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1142","time_end":"","time_stamp":"","time_start":"","title":"Compress and Compare: Interactively Evaluating Efficiency and Behavior Across ML Model Compression Experiments","uid":"v-full-1142","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Large Language Models (LLMs) like GPT-4 which support multimodal input (i.e., prompts containing images in addition to text) have immense potential to advance visualization research. However, many questions exist about the visual capabilities of such models, including how well they can read and interpret visually represented data. In our work, we address this question by evaluating the GPT-4 multimodal LLM using a suite of task sets meant to assess the model\u2019s visualization literacy. 
The task sets are based on existing work in the visualization community addressing both automated chart question answering and human visualization literacy across multiple settings. Our assessment finds that GPT-4 can perform tasks such as recognizing trends and extreme values, and also demonstrates some understanding of visualization design best practices. By contrast, GPT-4 struggles with simple value retrieval when not provided with the original dataset, lacks the ability to reliably distinguish between colors in charts, and occasionally suffers from hallucination and inconsistency. We conclude by reflecting on the model\u2019s strengths and weaknesses as well as the potential utility of models like GPT-4 for future visualization research. We also release all code, stimuli, and results for the task sets at the following link: (REDACTED FOR REVIEW)","accessible_pdf":false,"authors":[{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"abendeck3@gatech.edu","is_corresponding":true,"name":"Alexander Bendeck"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"john.stasko@cc.gatech.edu","is_corresponding":false,"name":"John Stasko"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Alexander Bendeck"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1147","time_end":"","time_stamp":"","time_start":"","title":"An Empirical Evaluation of the GPT-4 Multimodal Language Model on Visualization Literacy Tasks","uid":"v-full-1147","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Composite visualization represents a widely embraced design that combines multiple visual representations to create an integrated view. However, the traditional approach of creating composite visualizations in immersive environments typically occurs asynchronously outside of the immersive space and is carried out by experienced experts. In this work, we take the first step to empower users to participate in the creation of composite visualization within immersive environments through embodied interactions. This could provide a flexible and fluid experience for data exploration and facilitate a deep understanding of the relationship between data visualizations. We begin with forming a design space of embodied interactions to create various types of composite visualizations with the consideration of data relationships. Drawing inspiration from people's natural experience of manipulating physical objects, we design interactions to directly assemble composite visualizations in immersive environments. Building upon the design space, we present a series of case studies showcasing the interactive method to create different kinds of composite visualizations in Virtual Reality (VR). Subsequently, we conduct a user study to evaluate the usability of the derived interaction techniques and user experience of embodiedly creating composite visualizations. 
We find that empowering users to participate in composite visualizations through embodied interactions enables them to flexibly leverage different visualization representations for understanding and communicating the relationships between different views, which underscores the potential for a set of application scenarios in the future.","accessible_pdf":false,"authors":[{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"qzhual@connect.ust.hk","is_corresponding":true,"name":"Qian Zhu"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"luttul@umich.edu","is_corresponding":false,"name":"Tao Lu"},{"affiliations":["Adobe Research, San Jose, United States"],"email":"sguo@adobe.com","is_corresponding":false,"name":"Shunan Guo"},{"affiliations":["Hong Kong University of Science and Technology, Hong Kong, Hong Kong"],"email":"mxj@cse.ust.hk","is_corresponding":false,"name":"Xiaojuan Ma"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"yalongyang@hotmail.com","is_corresponding":false,"name":"Yalong Yang"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Qian Zhu"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1150","time_end":"","time_stamp":"","time_start":"","title":"CompositingVis: Exploring Interaction for Creating Composite Visualizations in Immersive Environments","uid":"v-full-1150","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Points of interest on a map such as restaurants, hotels, or subway stations, give rise to categorical point data: data that have a fixed location and one or more categorical attributes. Consequently, recent years have seen various set visualization approaches that visually connect points of the same category to support users in understanding the spatial distribution of categories. Existing methods use complex and often highly irregular shapes to connect points of the same category, leading to high cognitive load for the user. In this paper we introduce SimpleSets that use simple shapes to enclose categorical point patterns and provide a low-complexity overview of the data distribution. We give formal definitions of point patterns that correspond to simple shapes and describe an algorithm that partitions categorical points into few such patterns. Our second contribution is a rendering algorithm that transforms a given partition into a clean set of shapes resulting in an aesthetically pleasing set visualization. Our algorithm pays particular attention to resolving intersections between nearby shapes in a consistent manner. We compare SimpleSets to the state-of-the-art set visualizations using standard datasets from the literature. 
SimpleSets are designed to visualize disjoint categories; however, we discuss avenues to extend our technique to overlapping set systems.","accessible_pdf":false,"authors":[{"affiliations":["TU Eindhoven, Eindhoven, Netherlands"],"email":"s.w.v.d.broek@tue.nl","is_corresponding":true,"name":"Steven van den Broek"},{"affiliations":["TU Eindhoven, Eindhoven, Netherlands"],"email":"w.meulemans@tue.nl","is_corresponding":false,"name":"Wouter Meulemans"},{"affiliations":["TU Eindhoven, Eindhoven, Netherlands"],"email":"b.speckmann@tue.nl","is_corresponding":false,"name":"Bettina Speckmann"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Steven van den Broek"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1153","time_end":"","time_stamp":"","time_start":"","title":"SimpleSets: Capturing Categorical Point Patterns with Simple Shapes","uid":"v-full-1153","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Interactive visualizations are powerful tools for Exploratory Data Analysis (EDA), but how do they affect the observations analysts make about their data? We conducted a qualitative experiment with 13 professional data scientists analyzing two datasets within Jupyter notebooks, collecting a rich dataset of interaction traces and think-aloud utterances. By qualitatively analyzing participant verbalizations, we introduce the concept of \"observation-analysis states.\" These states capture both the dataset characteristics a participant focuses on and the insights they express. Our definition reveals that interactive visualizations on average lead to earlier and more complex insights about relationships between dataset attributes compared to static visualizations. Moreover, this process identified new measures for studying representation use in notebooks such as hover time, revisiting rate and representational diversity. In particular, revisiting rates revealed behavior where analysts revisit particular representations throughout the time course of an analysis, serving more as navigational aids through an EDA than as strict hypothesis answering tools. We show how these measures helped identify other patterns of analysis behavior, such as the \"80-20 rule\", where a small subset of representations drove the majority of observations. 
Based on these findings, we offer design guidelines for interactive exploratory analysis tooling and reflect on future directions for studying the role that visualizations play in EDA.","accessible_pdf":false,"authors":[{"affiliations":["MIT, Cambridge, United States"],"email":"dwootton@mit.edu","is_corresponding":true,"name":"Dylan Wootton"},{"affiliations":["MIT, Cambridge, United States"],"email":"amyraefoxphd@gmail.com","is_corresponding":false,"name":"Amy Rae Fox"},{"affiliations":["University of Colorado Boulder, Boulder, United States"],"email":"evan.peck@colorado.edu","is_corresponding":false,"name":"Evan Peck"},{"affiliations":["MIT, Cambridge, United States"],"email":"arvindsatya@mit.edu","is_corresponding":false,"name":"Arvind Satyanarayan"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Dylan Wootton"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1155","time_end":"","time_stamp":"","time_start":"","title":"Charting EDA: How Visualizations and Interactions Shape Analysis in Computational Notebooks.","uid":"v-full-1155","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Multi-objective evolutionary algorithms (MOEAs) have emerged as powerful tools for solving complex optimization problems characterized by multiple, often conflicting, objectives. While advancements have been made in computational efficiency as well as diversity and convergence of solutions, a critical challenge persists: the internal evolutionary mechanisms are opaque to human users. Drawing upon the successes of explainable AI in explaining complex algorithms and models, we argue that the need to understand the underlying evolutionary operators and population dynamics in MOEAs aligns well with a visual analytics paradigm. This paper introduces ParetoTracker, a visual analytics framework designed to support the comprehension and inspection of population dynamics in the evolutionary processes of MOEAs. Informed by preliminary literature review and expert interviews, the framework establishes a multi-level analysis scheme, which caters to user engagement and exploration ranging from examining overall trends in performance metrics to conducting fine-grained inspections of evolutionary operations. In contrast to conventional practices that require manual plotting of solutions for each generation, ParetoTracker facilitates the examination of temporal trends and dynamics across consecutive generations in an integrated visual interface. 
The effectiveness of the framework is demonstrated through case studies and expert interviews focused on widely adopted benchmark optimization problems.","accessible_pdf":false,"authors":[{"affiliations":["Southern University of Science and Technology, Shenzhen, China"],"email":"zhangzr32021@mail.sustech.edu.cn","is_corresponding":false,"name":"Zherui Zhang"},{"affiliations":["Southern University of Science and Technology, Shenzhen, China"],"email":"yangf2020@mail.sustech.edu.cn","is_corresponding":false,"name":"Fan Yang"},{"affiliations":["Southern University of Science and Technology, Shenzhen, China"],"email":"ranchengcn@gmail.com","is_corresponding":false,"name":"Ran Cheng"},{"affiliations":["Southern University of Science and Technology, Shenzhen, China"],"email":"mayx@sustech.edu.cn","is_corresponding":true,"name":"Yuxin Ma"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yuxin Ma"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1179","time_end":"","time_stamp":"","time_start":"","title":"ParetoTracker: Understanding Population Dynamics in Multi-objective Evolutionary Algorithms through Visual Analytics","uid":"v-full-1179","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"This paper presents an interactive technique to explain visual patterns in network visualizations to analysts who are unfamiliar with these visualizations and who are learning to read them. Learning a visualization requires mastering its visual grammar and decoding information presented through visual marks, graphical encodings, and spatial configurations. To help people learn unfamiliar network visualization designs and extract meaningful information, we introduce the concept of interactive pattern explanation that allows viewers to select an arbitrary area in a visualization, then mines the underlying data patterns, and eventually explains both visual and data patterns present in the viewer\u2019s selection. In a qualitative and a quantitative user study with a total of 32 participants, we compare interactive pattern explanations to only textual and only visual (cheatsheets) explanations. 
Our results show that interactive explanations increase learning of i) unfamiliar visualizations, ii) patterns in network science, and iii) the respective network terminology.","accessible_pdf":false,"authors":[{"affiliations":["Newcastle University, Newcastle Upon Tyne, United Kingdom"],"email":"xinhuan.shu@gmail.com","is_corresponding":true,"name":"Xinhuan Shu"},{"affiliations":["University of Edinburgh, Edinburgh, United Kingdom"],"email":"alexis.pister@hotmail.com","is_corresponding":false,"name":"Alexis Pister"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"tangjunxiu@zju.edu.cn","is_corresponding":false,"name":"Junxiu Tang"},{"affiliations":["University of Toronto, Toronto, Canada"],"email":"fanny@dgp.toronto.edu","is_corresponding":false,"name":"Fanny Chevalier"},{"affiliations":["Inria, Bordeaux, France","University of Edinburgh, Edinburgh, United Kingdom"],"email":"bbach@inf.ed.ac.uk","is_corresponding":false,"name":"Benjamin Bach"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Xinhuan Shu"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1185","time_end":"","time_stamp":"","time_start":"","title":"Does This Have a Particular Meaning?: Interactive Pattern Explanation for Network Visualizations","uid":"v-full-1185","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Emerging multimodal large language models (MLLMs) exhibit great potential for chart question answering (CQA). Recent efforts primarily focus on scaling up training datasets (i.e., charts, data tables, and question-answer (QA) pairs) through data collection and synthesis. However, our empirical study on existing MLLMs and CQA datasets reveals notable gaps. First, current data collection and synthesis focus on data volume and lack consideration of fine-grained visual encodings and QA tasks, resulting in unbalanced data distribution divergent from practical CQA scenarios. Second, existing work follows the training recipe of the base MLLMs initially designed for natural images, under-exploring the adaptation to unique chart characteristics, such as rich text elements. To fill the gap, we propose a visualization-referenced instruction tuning approach to guide the training dataset enhancement and model development. Specifically, we propose a novel data engine to effectively filter diverse and high-quality data from existing datasets and subsequently refine and augment the data using LLM-based generation techniques to better align with practical QA tasks and visual encodings. Then, to facilitate the adaptation to chart characteristics, we utilize the enriched data to train an MLLM by unfreezing the vision encoder and incorporating a mixture-of-resolution adaptation strategy for enhanced fine-grained recognition. Experimental results validate the effectiveness of our approach. Even with fewer training examples, our model consistently outperforms state-of-the-art CQA models on established benchmarks. 
We also contribute a dataset split as a benchmark for future research.","accessible_pdf":false,"authors":[{"affiliations":["The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China"],"email":"xingchen.zeng@outlook.com","is_corresponding":true,"name":"Xingchen Zeng"},{"affiliations":["The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China"],"email":"hlin386@connect.hkust-gz.edu.cn","is_corresponding":false,"name":"Haichuan Lin"},{"affiliations":["The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China"],"email":"yyebd@connect.ust.hk","is_corresponding":false,"name":"Yilin Ye"},{"affiliations":["The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China","The Hong Kong University of Science and Technology, Hong Kong SAR, China"],"email":"weizeng@hkust-gz.edu.cn","is_corresponding":false,"name":"Wei Zeng"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Xingchen Zeng"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1193","time_end":"","time_stamp":"","time_start":"","title":"Advancing Multimodal Large Language Models in Chart Question Answering with Visualization-Referenced Instruction Tuning","uid":"v-full-1193","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"The Dunning-Kruger Effect (DKE) is a metacognitive phenomenon where low-skilled individuals tend to overestimate their competence while high-skilled individuals tend to underestimate their competence. This effect has been observed in a number of domains including humor, grammar, and logic. In this paper, we explore if and how DKE manifests in visual reasoning and visual data analysis tasks. Across two online user studies involving (1) a sliding puzzle game and (2) a scatterplot-based categorization task, we demonstrate that individuals are susceptible to DKE in visual tasks: those who performed best underestimated their performance, while bottom performers overestimated their performance. In addition, we contribute novel analyses that correlate susceptibility of DKE with several variables including personality traits and user interactions. 
Our findings pave the way for novel modes of bias detection via interaction patterns and establish promising directions towards interventions tailored to an individual's personality traits.","accessible_pdf":false,"authors":[{"affiliations":["Emory University, Atlanta, United States"],"email":"mengyu.chen@emory.edu","is_corresponding":true,"name":"Mengyu Chen"},{"affiliations":["Emory University, Atlanta, United States"],"email":"yijun.liu2@emory.edu","is_corresponding":false,"name":"Yijun Liu"},{"affiliations":["Emory University, Atlanta, United States"],"email":"emily.wall@emory.edu","is_corresponding":false,"name":"Emily Wall"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Mengyu Chen"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1202","time_end":"","time_stamp":"","time_start":"","title":"Unmasking Dunning-Kruger Effect in Visual Reasoning and Visual Data Analysis","uid":"v-full-1202","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"We present ProvenanceWidgets, a JavaScript library of UI control elements such as radio buttons, checkboxes, and dropdowns to track and dynamically overlay a user's analytic provenance. These in situ overlays not only save screen space but also minimize the amount of time and effort needed to access the same information from elsewhere in the UI. In this paper, we discuss how we design modular UI control elements to track how often and how recently a user interacts with them and design visual overlays showing an aggregated summary as well as a detailed temporal history. We demonstrate the capability of ProvenanceWidgets by recreating three prior widget libraries: (1) Scented Widgets, (2) Phosphor objects, and (3) Dynamic Query Widgets. We also evaluated its expressiveness and conducted case studies with visualization developers to evaluate its effectiveness. We find that ProvenanceWidgets enables developers to implement custom provenance-tracking applications effectively. 
ProvenanceWidgets is available as open-source software at https://github.com/ProvenanceWidgets to help application developers build custom provenance-based systems.","accessible_pdf":false,"authors":[{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"arpitnarechania@gatech.edu","is_corresponding":true,"name":"Arpit Narechania"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"kaustubhodak1@gmail.com","is_corresponding":false,"name":"Kaustubh Odak"},{"affiliations":["ETH Z\u00fcrich, Z\u00fcrich, Switzerland"],"email":"melassady@ai.ethz.ch","is_corresponding":false,"name":"Mennatallah El-Assady"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"endert@gatech.edu","is_corresponding":false,"name":"Alex Endert"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Arpit Narechania"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1204","time_end":"","time_stamp":"","time_start":"","title":"ProvenanceWidgets: A Library of UI Control Elements to Track and Dynamically Overlay Analytic Provenance","uid":"v-full-1204","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Graphs are often used to model relationships between entities. The identification and visualization of clusters in graphs enable insight discovery in many application areas, such as life sciences and social sciences. Force-directed graph layout algorithms promote the visual saliency of clusters, as they generally bring adjacent nodes closer together, and push non-adjacent nodes apart. In this work, we study the impact of node ordering on the visual saliency of clusters in orderable node-link diagrams, namely radial diagrams, arc diagrams and symmetric arc diagrams. Through a crowdsourced controlled experiment, we show that users can count clusters consistently more accurately, and to a large extent faster, with orderable node-link diagrams than with three state-of-the-art force-directed layout algorithms, i.e., `Linlog', `Backbone', and `sfdp'. The measured advantage is greater in the case of low cluster separability and/or low compactness. 
A free copy of this paper and all supplemental materials are available at https://osf.io/kc3dg/?view_only=892f7b96752e40a6baefb2e50e866f9d","accessible_pdf":false,"authors":[{"affiliations":["Luxembourg Institute of Science and Technology, Esch-sur-Alzette, Luxembourg"],"email":"nora.alnaami@list.lu","is_corresponding":false,"name":"Nora Al-Naami"},{"affiliations":["Luxembourg Institute of Science and Technology, Belvaux, Luxembourg"],"email":"nicolas.medoc@list.lu","is_corresponding":false,"name":"Nicolas Medoc"},{"affiliations":["Uppsala University, Uppsala, Sweden"],"email":"matteo.magnani@it.uu.se","is_corresponding":false,"name":"Matteo Magnani"},{"affiliations":["Luxembourg Institute of Science and Technology, Belvaux, Luxembourg"],"email":"mohammad.ghoniem@list.lu","is_corresponding":true,"name":"Mohammad Ghoniem"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Mohammad Ghoniem"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1214","time_end":"","time_stamp":"","time_start":"","title":"Improved Visual Saliency of Graph Clusters with Orderable Node-Link Layouts","uid":"v-full-1214","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Placing text labels is a common way to explain key elements in a given scene. Given a graphic input and original label information, how to place labels to meet both geometric and aesthetic requirements is an open challenging problem. Geometry-wise, traditional rule-driven solutions struggle to capture the complex interactions between labels, let alone consider graphical/appearance content. In terms of aesthetics, training/evaluation data ideally require nontrivial effort and expertise in design, thus resulting in a lack of decent datasets for learning-based methods. To address the above challenges, we formulate the task with a graph representation, where nodes correspond to labels and edges to the between-label interactions, and treat label placement as a node position prediction problem. With this novel representation, we design a Label Placement Graph Transformer (LPGT) to predict label positions. Specifically, edge-level attention, conditioned on node representations, is introduced to reveal potential relationships between labels. To integrate graphic/image information, we design a feature aligning strategy that extracts deep features for nodes and edges efficiently. Next, to address the dataset issue, we collect commercial illustrations with professionally designed label layouts from household appliance manuals, and annotate them with useful information to create a novel dataset named the Appliance Manual Illustration Labels (AMIL) dataset. 
In a thorough evaluation on AMIL, our LPGT solution achieves promising label placement performance compared with popular baselines.","accessible_pdf":false,"authors":[{"affiliations":["Southwest University, Beibei, China"],"email":"qujingwei@swu.edu.cn","is_corresponding":true,"name":"Jingwei Qu"},{"affiliations":["Southwest University, Chongqing, China"],"email":"z2211973606@email.swu.edu.cn","is_corresponding":false,"name":"Pingshun Zhang"},{"affiliations":["Southwest University, Beibei, China"],"email":"enyuche@gmail.com","is_corresponding":false,"name":"Enyu Che"},{"affiliations":["COLLEGE OF COMPUTER AND INFORMATION SCIENCE, SOUTHWEST UNIVERSITY SCHOOL OF SOFTWARE, Chongqing, China"],"email":"out1147205215@outlook.com","is_corresponding":false,"name":"Yinan Chen"},{"affiliations":["Stony Brook University, New York, United States"],"email":"hling@cs.stonybrook.edu","is_corresponding":false,"name":"Haibin Ling"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Jingwei Qu"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1218","time_end":"","time_stamp":"","time_start":"","title":"Graph Transformer for Label Placement","uid":"v-full-1218","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"How do cancer cells grow, divide, proliferate and die? How do drugs influence these processes? These are difficult questions that we can attempt to answer with a combination of time-series microscopy experiments, classification algorithms, and data visualization. However, collecting this type of data and applying algorithms to segment and track cells and construct lineages of proliferation is error-prone; and identifying the errors can be challenging since it often requires cross-checking multiple data types. Similarly, analyzing and communicating the results necessitates synthesizing different data types into a single narrative. State-of-the-art visualization methods for such data use independent line charts, tree diagrams, and images in separate views. However, this spatial separation requires the viewer of these charts to combine the relevant pieces of data in memory. To simplify this challenging task, we describe design principles for weaving cell images, time-series data, and tree data into a cohesive visualization. Our design principles are based on choosing a primary data type that drives the layout and integrates the other data types into that layout. We then introduce Aardvark, a system that uses these principles to implement novel visualization techniques. 
Based on Aardvark, we demonstrate the utility of each of these approaches for discovery, communication, and data debugging in a series of case studies.","accessible_pdf":false,"authors":[{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"devin@sci.utah.edu","is_corresponding":true,"name":"Devin Lange"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"robert.judson-torres@hci.utah.edu","is_corresponding":false,"name":"Robert L Judson-Torres"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"tzangle@chemeng.utah.edu","is_corresponding":false,"name":"Thomas A Zangle"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"alex@sci.utah.edu","is_corresponding":false,"name":"Alexander Lex"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Devin Lange"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1232","time_end":"","time_stamp":"","time_start":"","title":"Aardvark: Composite Visualizations of Trees, Time-Series, and Images","uid":"v-full-1232","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Exploratory data science is an iterative process of obtaining, cleaning, profiling, analyzing, and interpreting data. This cyclical way of working creates challenges within the linear structure of computational notebooks that lead to issues with code quality, recall, and reproducibility. To remedy this, we present Loops, a set of visual support techniques for iterative and exploratory data analysis in computational notebooks. Loops leverages provenance information to visualize the impact of changes made within a notebook. In visualizations of the notebook history, we trace the evolution of the notebook over time and highlight differences between versions. Loops visualizes the provenance of code, markdown, tables, visualizations, and images and their respective differences. Analysts can explore these differences in detail in a separate view. Loops not only improves the reproducibility of notebooks, but also supports analysts in their data science work by showing the effects of changes and facilitating comparison of multiple versions. 
We demonstrate utility and potential impact of our approach in two use cases and feedback from notebook users from a range of backgrounds.","accessible_pdf":false,"authors":[{"affiliations":["Johannes Kepler University Linz, Linz, Austria"],"email":"klaus@eckelt.info","is_corresponding":true,"name":"Klaus Eckelt"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"kirangadhave2@gmail.com","is_corresponding":false,"name":"Kiran Gadhave"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"alex@sci.utah.edu","is_corresponding":false,"name":"Alexander Lex"},{"affiliations":["Johannes Kepler University Linz, Linz, Austria"],"email":"marc.streit@jku.at","is_corresponding":false,"name":"Marc Streit"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Klaus Eckelt"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1251","time_end":"","time_stamp":"","time_start":"","title":"Loops: Leveraging Provenance and Visualization to Support Exploratory Data Analysis in Notebooks","uid":"v-full-1251","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"People commonly utilize visualizations not only to examine a given dataset, but also to draw generalizable conclusions about the underlying models or phenomena. Previous research has compared human visual inference to that of an optimal Bayesian agent, with deviations from rational analysis viewed as problematic. However, human reliance on non-normative heuristics may prove advantageous in certain circumstances. We investigate scenarios where human intuition might surpass idealized statistical rationality. In two experiments, we examine individuals' accuracy in characterizing the parameters of known data-generating models from bivariate visualizations. Our findings indicate that, although participants generally exhibited lower accuracy compared to statistical models, they frequently outperformed Bayesian agents, particularly when faced with extreme samples. Participants appeared to rely on their internal models to filter out noisy visualizations, thus improving their resilience against spurious data. However, participants displayed overconfidence and struggled with uncertainty estimation. They also exhibited higher variance than statistical machines. Our findings suggest that analyst gut reactions to visualizations may provide an advantage, even when departing from rationality. These results carry implications for designing visual analytics tools, offering new perspectives on how to integrate statistical models and analyst intuition for improved inference and decision-making.","accessible_pdf":false,"authors":[{"affiliations":["Indiana University, Indianapolis, United States"],"email":"rkoonch@iu.edu","is_corresponding":true,"name":"Ratanond Koonchanok"},{"affiliations":["Argonne National Laboratory, Lemont, United States","University of Illinois Chicago, Chicago, United States"],"email":"papka@anl.gov","is_corresponding":false,"name":"Michael E. 
Papka"},{"affiliations":["Indiana University, Indianapolis, United States"],"email":"redak@iu.edu","is_corresponding":false,"name":"Khairi Reda"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Ratanond Koonchanok"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1256","time_end":"","time_stamp":"","time_start":"","title":"Trust Your Gut: Comparing Human and Machine Inference from Noisy Visualizations","uid":"v-full-1256","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Providing effective guidance for users has long been an important and challenging task for efficient exploratory visual analytics, especially when selecting variables for visualization in high-dimensional datasets. Correlation is the most widely applied metric for guidance in statistical and analytical tools, however a reliance on correlation may lead users towards false positives when interpreting causal relations in the data. In this work, inspired by prior insights on the benefits of counterfactual visualization in supporting visual causal inference, we propose a novel, simple, and efficient counterfactual guidance method to enhance causal inference performance in guided exploratory analytics based on insights and concerns gathered from expert interviews. Our technique aims to capitalize on the benefits of counterfactual approaches while reducing their complexity for users. We integrated counterfactual guidance into an exploratory visual analytics system, and using a synthetically generated ground-truth causal dataset, conducted a comparative user study and evaluated to what extent counterfactual guidance can help lead users to more precise visual causal inferences. The results suggest that counterfactual guidance improved visual causal inference performance, and also led to different exploratory behaviors compared to correlation-based guidance. 
Based on these findings, we offer future directions to incorporate and examine counterfactual guidance to better support exploratory visual analytics.","accessible_pdf":false,"authors":[{"affiliations":["University of North Carolina-Chapel Hill, Chapel Hill, United States"],"email":"zeyuwang@cs.unc.edu","is_corresponding":true,"name":"Arran Zeyu Wang"},{"affiliations":["UNC-Chapel Hill, Chapel Hill, United States"],"email":"borland@renci.org","is_corresponding":false,"name":"David Borland"},{"affiliations":["University of North Carolina, Chapel Hill, United States"],"email":"gotz@unc.edu","is_corresponding":false,"name":"David Gotz"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Arran Zeyu Wang"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1258","time_end":"","time_stamp":"","time_start":"","title":"Beyond Correlation: Incorporating Counterfactual Guidance to Better Support Exploratory Visual Analysis","uid":"v-full-1258","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"In various scientific and industrial domains, analyzing multivariate spatial data, i.e., vectors associated with spatial locations, is common practice. To analyze those datasets, analysts may turn to models such as Spatial Blind Source Separation (SBSS). Designed explicitly for spatial data analysis, SBSS finds latent components in the dataset and is superior to popular non-spatial models, like PCA. However, when analysts try different tuning parameter settings, the number of latent components complicates analytical tasks. Based on our years-long collaboration with SBSS researchers, we propose a visualization approach to tackle this challenge. The main component is UnDRground Tubes (UT), a general-purpose idiom combining ideas from set visualization and multidimensional projections. We describe the UT visualization pipeline and integrate UT into an interactive multiple-view system. We demonstrate its effectiveness through interviews with SBSS experts, a qualitative evaluation with visualization experts, and computational experiments. SBSS experts were excited about our approach. They saw many benefits for their work and potential applications for geostatistical data analysis more generally. UT was also very well received by visualization experts. 
Our benchmarks show that UT projections and its heuristics are appropriate.","accessible_pdf":false,"authors":[{"affiliations":["TU Wien, Vienna, Austria"],"email":"nikolaus.piccolotto@tuwien.ac.at","is_corresponding":true,"name":"Nikolaus Piccolotto"},{"affiliations":["TU Wien, Vienna, Austria"],"email":"mwallinger@ac.tuwien.ac.at","is_corresponding":false,"name":"Markus Wallinger"},{"affiliations":["Institute of Visual Computing and Human-Centered Technology, Vienna, Austria"],"email":"miksch@ifs.tuwien.ac.at","is_corresponding":false,"name":"Silvia Miksch"},{"affiliations":["TU Wien, Vienna, Austria"],"email":"markus.boegl@tuwien.ac.at","is_corresponding":false,"name":"Markus B\u00f6gl"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Nikolaus Piccolotto"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1272","time_end":"","time_stamp":"","time_start":"","title":"UnDRground Tubes: Exploring Spatial Data With Multidimensional Projections and Set Visualization","uid":"v-full-1272","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"We developed and validated an instrument to measure the perceived readability in data visualization: PREVis. Researchers and practitioners can easily use this instrument as part of their evaluations to compare the perceived readability of different visual data representations. Our instrument can complement results from controlled experiments on user task performance or provide additional data during in-depth qualitative work such as design iterations when developing a new technique. Although readability is recognized as an essential quality of data visualizations, so far there has not been a unified definition of the construct in the context of visual representations. As a result, researchers often lack guidance for determining how to ask people to rate their perceived readability of a visualization. To address this issue, we engaged in a rigorous process to develop the first validated instrument targeted at the subjective readability of visual data representations. Our final instrument consists of 11 items across 4 dimensions: understandability, layout clarity, readability of data values, and readability of data patterns. We provide the questionnaire as a document with implementation guidelines on osf.io/9cg8j. 
Beyond this instrument, we contribute a discussion of how researchers have previously assessed visualization readability, and an analysis of the factors underlying perceived readability in visual data representations.","accessible_pdf":false,"authors":[{"affiliations":["LISN, Universit\u00e9 Paris Saclay, CNRS, Orsay, France","Aviz, Inria, Saclay, France"],"email":"acabouat@gmail.com","is_corresponding":true,"name":"Anne-Flore Cabouat"},{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Orsay, France","Inria, Saclay, France"],"email":"tingying.he@inria.fr","is_corresponding":false,"name":"Tingying He"},{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Orsay, France","Inria, Saclay, France"],"email":"petra.isenberg@inria.fr","is_corresponding":false,"name":"Petra Isenberg"},{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Orsay, France","Inria, Saclay, France"],"email":"tobias.isenberg@gmail.com","is_corresponding":false,"name":"Tobias Isenberg"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Anne-Flore Cabouat"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1275","time_end":"","time_stamp":"","time_start":"","title":"PREVis: Perceived Readability Evaluation for Visualizations","uid":"v-full-1275","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"This paper presents a novel end-to-end framework for closed-form computation and visualization of critical point uncertainty in 2D uncertain scalar fields. Critical points are fundamental topological descriptors used in the visualization and analysis of scalar fields. The uncertainty inherent in data (e.g., observational and experimental data, approximations in simulations, and compression), however, creates uncertainty regarding critical point positions. Uncertainty in critical point positions, therefore, cannot be ignored, given their impact on downstream data analysis tasks. In this work, we study uncertainty in critical points as a function of uncertainty in data modeled with probability distributions. Although Monte Carlo (MC) sampling techniques have been used in prior studies to quantify critical point uncertainty, they are often expensive and are infrequently used in production-quality visualization software. We, therefore, propose a new end-to-end framework to address these challenges that comprises a threefold contribution. First, we derive the critical point uncertainty in closed form, which is more accurate and efficient than the conventional MC sampling methods. Specifically, we provide the closed-form and semianalytical (a mix of closed-form and MC methods) solutions for parametric (e.g., uniform, Epanechnikov) and nonparametric models (e.g., histograms) with finite support. Second, we accelerate critical point probability computations using a parallel implementation with the VTK-m library, which is platform portable. Finally, we demonstrate the integration of our implementation with the ParaView software system to demonstrate near-real-time results for real datasets.","accessible_pdf":false,"authors":[{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"tushar.athawale@gmail.com","is_corresponding":true,"name":"Tushar M. 
Athawale"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"wangz@ornl.gov","is_corresponding":false,"name":"Zhe Wang"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"pugmire@ornl.gov","is_corresponding":false,"name":"David Pugmire"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"kmorel@acm.org","is_corresponding":false,"name":"Kenneth Moreland"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"gongq@ornl.gov","is_corresponding":false,"name":"Qian Gong"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"klasky@ornl.gov","is_corresponding":false,"name":"Scott Klasky"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"crj@sci.utah.edu","is_corresponding":false,"name":"Chris R. Johnson"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"paul.rosen@utah.edu","is_corresponding":false,"name":"Paul Rosen"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Tushar M. Athawale"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1277","time_end":"","time_stamp":"","time_start":"","title":"Uncertainty Visualization of Critical Points of 2D Scalar Fields for Parametric and Nonparametric Probabilistic Models","uid":"v-full-1277","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Participatory budgeting (PB) is a democratic approach to allocating municipal spending that has been adopted in many places in recent years, including in Chicago. Current PB voting resembles a ballot where residents are asked which municipal projects, such as school improvements and road repairs, to fund with a limited budget. In this work, we ask how interactive visualization can benefit PB by conducting a design probe-based interview study (N=13) with policy workers and academics with expertise in PB, urban planning, and civic HCI. Our probe explores how graphical elicitation of voter preferences and a dashboard of voting statistics can be incorporated into a realistic PB tool. Through qualitative analysis, we find that visualization creates opportunities for city government to set expectations about budget constraints while also granting their constituents greater freedom to articulate a wider range of preferences. However, using visualization to provide transparency about PB requires efforts to mitigate potential access barriers and mistrust. 
We call for more visualization professionals to help build civic capacity by working in and studying political systems.","accessible_pdf":false,"authors":[{"affiliations":["University of Chicago, Chicago, United States"],"email":"kalea@uchicago.edu","is_corresponding":true,"name":"Alex Kale"},{"affiliations":["University of Chicago, Chicago, United States"],"email":"danni6@uchicago.edu","is_corresponding":false,"name":"Danni Liu"},{"affiliations":["University of Chicago, Chicago, United States"],"email":"mariagabrielaa@uchicago.edu","is_corresponding":false,"name":"Maria Gabriela Ayala"},{"affiliations":["University of Chicago, Chicago, United States"],"email":"hwschwab@uchicago.edu","is_corresponding":false,"name":"Harper Schwab"},{"affiliations":["University of Washington, Seattle, United States","University of Utah, Salt Lake City, United States"],"email":"mcnutt.andrew@gmail.com","is_corresponding":false,"name":"Andrew M McNutt"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Alex Kale"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1281","time_end":"","time_stamp":"","time_start":"","title":"What Can Interactive Visualization do for Participatory Budgeting in Chicago?","uid":"v-full-1281","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Data tables are one of the most common ways in which people encounter data. Although mostly built with text and numbers, data tables have a spatial layout and often exhibit visual elements meant to facilitate their reading. Surprisingly, there is an empirical knowledge gap on how people read and use tables and how different visual aids affect people's ability to use them. In this work, we seek to address this vacuum through a controlled study. We asked participants to repeatedly perform four different tasks with tables in four table representation conditions (plain tables, tables with zebra striping, tables with cell background color encoding cell value, and tables with background bar length in a cell encoding cell value). We analyzed completion time, error rate, gaze-tracking data, mouse movement and participant preferences. We found that visual encodings help for finding maximum values (especially color), but not as much as zebra striping helps in a complex task (comparison of proportional differences). We also characterize typical human behavior for the different tasks. 
These findings can inform the design of tables and research directions for improving presentation of data in tabular form.","accessible_pdf":false,"authors":[{"affiliations":["University of Victoria, Victoria, Canada"],"email":"yongfengji@uvic.ca","is_corresponding":false,"name":"YongFeng Ji"},{"affiliations":["University of Victoria, Victoria, Canada"],"email":"cperin@uvic.ca","is_corresponding":true,"name":"Charles Perin"},{"affiliations":["University of Victoria, Victoria, Canada"],"email":"nacenta@gmail.com","is_corresponding":false,"name":"Miguel A Nacenta"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Charles Perin"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1288","time_end":"","time_stamp":"","time_start":"","title":"The Effect of Visual Aids on Reading Numeric Data Tables","uid":"v-full-1288","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Visualization linters are end-user-facing evaluators that automatically identify potential chart issues. These spell-checker-like systems offer a blend of interpretability and customization that is not found in other forms of automated assistance. However, existing linters do not model context and have primarily targeted users who do not need assistance, resulting in obvious\u2014even annoying\u2014advice. We investigate these issues within the domain of color palette design, which serves as a microcosm of visualization design concerns. We contribute a GUI-based color palette linter as a design probe that covers perception, accessibility, context, and other design criteria, and use it to explore visual explanations, integrated fixes, and user-defined linting rules. Through a formative interview study and theory-driven analysis, we find that linters can be meaningfully integrated into graphical contexts thereby addressing many of their core issues. 
We discuss implications for integrating linters into visualization tools, developing improved assertion languages, and supporting end-user tunable advice---all laying the groundwork for more effective visualization linters in any context.","accessible_pdf":false,"authors":[{"affiliations":["University of Washington, Seattle, United States","University of Utah, Salt Lake City, United States"],"email":"mcnutt.andrew@gmail.com","is_corresponding":true,"name":"Andrew M McNutt"},{"affiliations":["University of Washington, Seattle, United States"],"email":"maureen.stone@gmail.com","is_corresponding":false,"name":"Maureen Stone"},{"affiliations":["University of Washington, Seattle, United States"],"email":"jheer@uw.edu","is_corresponding":false,"name":"Jeffrey Heer"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Andrew M McNutt"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1290","time_end":"","time_stamp":"","time_start":"","title":"Mixing Linters with GUIs: A Color Palette Design Probe","uid":"v-full-1290","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Emotion is an important factor to consider when designing visualizations as it can impact the amount of trust viewers place in a visualization, how well they can retrieve information and understand the underlying data, and how much they engage with or connect to a visualization. We conducted five crowdsourced experiments to quantify the effects of color, chart type, data trend, data variability and data density on emotion (measured through self-reported arousal and valence). Results from our experiments show that there are multiple design elements which influence the emotion induced by a visualization and, more surprisingly, that certain data characteristics influence the emotion of viewers even when the data has no meaning. 
In light of these findings, we offer guidelines on how to use color, scale, and chart type to counterbalance and emphasize the emotional impact of immutable data characteristics.","accessible_pdf":false,"authors":[{"affiliations":["University of Waterloo, Waterloo, Canada","University of Victoria, Victoria, Canada"],"email":"cartergblair@gmail.com","is_corresponding":false,"name":"Carter Blair"},{"affiliations":["University of Victoria, Victoria, Canada","Delft University of Technology, Delft, Netherlands"],"email":"xiyao.wang23@gmail.com","is_corresponding":false,"name":"Xiyao Wang"},{"affiliations":["University of Victoria, Victoria, Canada"],"email":"cperin@uvic.ca","is_corresponding":true,"name":"Charles Perin"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Charles Perin"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1291","time_end":"","time_stamp":"","time_start":"","title":"Quantifying Emotional Responses to Immutable Data Characteristics and Designer Choices in Data Visualizations","uid":"v-full-1291","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Annotations play a vital role in highlighting critical aspects of visualizations, aiding in data externalization and exploration, collaborative data analysis, and visual storytelling. However, despite their widespread use, we identified the lack of a design space describing common annotation practices. In this paper, we evaluated over 1,800 static annotated charts to understand how people annotate visualizations in practice. Through qualitative coding of these diverse real-world annotated charts, we explore three primary aspects of annotation usage patterns: analytic purposes for chart annotations (e.g., present, identify, summarize, or compare data features), mechanisms for chart annotations (e.g., types and combinations of annotations used, frequency of different annotation types across chart types, etc.), and the data source used to generate the annotations. We then synthesized our findings into a design space of annotations, highlighting key design choices for chart annotations.
We presented three case studies illustrating our design space as a practical framework for chart annotations to enhance the communication of visualization insights.","accessible_pdf":false,"authors":[{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"dilshadur@sci.utah.edu","is_corresponding":true,"name":"Md Dilshadur Rahman"},{"affiliations":["University of Oklahoma, Norman, United States"],"email":"quadri@ou.edu","is_corresponding":false,"name":"Ghulam Jilani Quadri"},{"affiliations":["University of South Florida, Tampa, United States"],"email":"bdoppalapudi@usf.edu","is_corresponding":false,"name":"Bhavana Doppalapudi"},{"affiliations":["University of North Carolina-Chapel Hill, Chapel Hill, United States"],"email":"danielle.szafir@cs.unc.edu","is_corresponding":false,"name":"Danielle Albers Szafir"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"paul.rosen@utah.edu","is_corresponding":false,"name":"Paul Rosen"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Md Dilshadur Rahman"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1295","time_end":"","time_stamp":"","time_start":"","title":"A Qualitative Analysis of Common Practices in Annotations: A Taxonomy and Design Space","uid":"v-full-1295","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"We present the results of an exploratory study on how pairs interact with speech commands and touch gestures on a wall-sized display during a collaborative sensemaking task. Previous work has shown that speech commands, alone or in combination with other input modalities, can support visual data exploration by individuals. However, it is still unknown whether and how speech commands can be used in collaboration, and for what tasks. To answer these questions, we developed a functioning prototype that we used as a technology probe. We conducted an in-depth exploratory study with 20 participants (10 pairs) to analyze their interaction choices, the interplay between the input modalities, and their collaboration. While touch was the most used modality, we found that participants preferred speech commands for global operations, used them for distant interaction, and that speech interaction contributed to the awareness of the partner\u2019s actions. Furthermore, the likelihood of using speech commands during collaboration was related to the personality trait of agreeableness. Regarding collaboration styles, participants interacted with speech equally often whether they were in loosely or closely coupled collaboration. While the partners stood closer to each other during close collaboration, they did not walk away from their partner to use speech commands.
From our findings, we derive and contribute a set of design considerations for collaborative and multimodal interactive data analysis systems.","accessible_pdf":false,"authors":[{"affiliations":["University of Bremen, Bremen, Germany"],"email":"molina@uni-bremen.de","is_corresponding":true,"name":"Gabriela Molina Le\u00f3n"},{"affiliations":["LISN, Universit\u00e9 Paris-Saclay, CNRS, INRIA, Orsay, France"],"email":"anastasia.bezerianos@universite-paris-saclay.fr","is_corresponding":false,"name":"Anastasia Bezerianos"},{"affiliations":["Inria, Palaiseau, France"],"email":"olivier.gladin@inria.fr","is_corresponding":false,"name":"Olivier Gladin"},{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Orsay, France","Inria, Saclay, France"],"email":"petra.isenberg@inria.fr","is_corresponding":false,"name":"Petra Isenberg"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Gabriela Molina Le\u00f3n"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1302","time_end":"","time_stamp":"","time_start":"","title":"Talk to the Wall: The Role of Speech Interaction in Collaborative Visual Analytics","uid":"v-full-1302","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Building information modeling (BIM) describes a central data pool covering the entire life cycle of a construction project. Similarly, building energy modeling (BEM) describes the process of using a 3D representation of a building as a basis for thermal simulations to assess the building\u2019s energy performance. This paper explores the intersection of BIM and BEM, focusing on the challenges and methodologies in converting BIM data into BEM representations for energy performance analysis. BEMTrace integrates 3D data wrangling techniques with visualization methodologies to enhance the accuracy and traceability of the BIM-to-BEM conversion process. Through parsing, error detection, and algorithmic correction of BIM data, our methods generate valid BEM models suitable for energy simulation. Visualization techniques provide transparent insights into the conversion process, aiding error identification, validation, and user comprehension. We introduce context-adaptive selections to facilitate user interaction and understanding throughout the conversion process.
By evaluating user feedback, we show that BEMTrace can solve domain-specific tasks.","accessible_pdf":false,"authors":[{"affiliations":["VRVis Zentrum f\u00fcr Virtual Reality und Visualisierung Forschungs-GmbH, Vienna, Austria"],"email":"walch@vrvis.at","is_corresponding":false,"name":"Andreas Walch"},{"affiliations":["VRVis Zentrum f\u00fcr Virtual Reality und Visualisierung Forschungs-GmbH, Vienna, Austria"],"email":"szabo@vrvis.at","is_corresponding":false,"name":"Attila Szabo"},{"affiliations":["VRVis Zentrum f\u00fcr Virtual Reality und Visualisierung Forschungs-GmbH, Vienna, Austria"],"email":"hs@vrvis.at","is_corresponding":false,"name":"Harald Steinlechner"},{"affiliations":["Independent Researcher, Vienna, Austria"],"email":"thomas@ortner.fyi","is_corresponding":false,"name":"Thomas Ortner"},{"affiliations":["Institute of Visual Computing & Human-Centered Technology, Vienna, Austria"],"email":"groeller@cg.tuwien.ac.at","is_corresponding":false,"name":"Eduard Gr\u00f6ller"},{"affiliations":["VRVis Zentrum f\u00fcr Virtual Reality und Visualisierung Forschungs-GmbH, Vienna, Austria"],"email":"johanna.schmidt@vrvis.at","is_corresponding":true,"name":"Johanna Schmidt"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Johanna Schmidt"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1307","time_end":"","time_stamp":"","time_start":"","title":"BEMTrace: Visualization-driven approach for deriving Building Energy Models from BIM","uid":"v-full-1307","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Visualizations play a critical role in validating and improving statistical models. However, the design space of model check visualizations is not well understood, making it difficult for authors to explore and specify effective graphical model checks. VMC defines a model check visualization using four components: (1) samples of distributions of checkable quantities generated from the model, including predictive distributions for new data and distributions of model parameters; (2) transformations on observed data to facilitate comparison; (3) visual representations of distributions; and (4) layouts to facilitate comparing model samples and observed data. We contribute an implementation of VMC as an R package. We validate VMC by reproducing a set of canonical model check examples, and show how using VMC to generate model checks reduces the edit distance between visualizations relative to existing visualization toolkits.
The findings of an interview study with three expert modelers who used VMC highlight challenges and opportunities for encouraging exploration of correct, effective model check visualizations.","accessible_pdf":false,"authors":[{"affiliations":["Northwestern University, Evanston, United States"],"email":"ziyangguo1030@gmail.com","is_corresponding":true,"name":"Ziyang Guo"},{"affiliations":["University of Chicago, Chicago, United States"],"email":"kalea@uchicago.edu","is_corresponding":false,"name":"Alex Kale"},{"affiliations":["Northwestern University, Chicago, United States"],"email":"matthew.kay@gmail.com","is_corresponding":false,"name":"Matthew Kay"},{"affiliations":["Northwestern University, Evanston, United States"],"email":"jhullman@northwestern.edu","is_corresponding":false,"name":"Jessica Hullman"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Ziyang Guo"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1309","time_end":"","time_stamp":"","time_start":"","title":"VMC: A Grammar for Visualizing Statistical Model Checks","uid":"v-full-1309","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"We apply an approach from cognitive linguistics by mapping Conceptual Metaphor Theory (CMT) to the visualization domain to address patterns of visual conceptual metaphors that are often used in science infographics. Metaphors play an essential part in visual communication and are frequently employed to explain complex concepts. However, their use is often based on intuition, rather than following a formal process. At present, we lack tools and language for understanding and describing metaphor use in visualization to the extent that a taxonomy and grammar could guide the creation of visual components, e.g., infographics. Our classification of the visual conceptual mappings within scientific representations is based on the breakdown of visual components in existing scientific infographics. We demonstrate the development of this mapping through a detailed analysis of data collected from four domains (biomedicine, climate, space, and anthropology) that represent a diverse range of visual conceptual metaphors used in the visual communication of science. This work allows us to identify patterns of visual conceptual metaphor use within the domains, resolve ambiguities about why specific conceptual metaphors are used, and develop a better overall understanding of visual metaphor use in scientific infographics. Our analysis shows that ontological and orientational conceptual metaphors are the most widely applied to translate complex scientific concepts.
To support our findings we developed a visual exploratory tool based on the collected database that places the individual infographics on a spatio-temporal scale and illustrates the breakdown of visual conceptual metaphors.","accessible_pdf":false,"authors":[{"affiliations":["Masaryk University, Brno, Czech Republic"],"email":"hana.pokojna@gmail.com","is_corresponding":true,"name":"Hana Pokojn\u00e1"},{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Orsay, France","Inria, Saclay, France"],"email":"tobias.isenberg@gmail.com","is_corresponding":false,"name":"Tobias Isenberg"},{"affiliations":["University of Rostock, Rostock, Germany"],"email":"stefan.bruckner@gmail.com","is_corresponding":false,"name":"Stefan Bruckner"},{"affiliations":["Masaryk University, Brno, Czech Republic"],"email":"kozlikova@fi.muni.cz","is_corresponding":false,"name":"Barbora Kozlikova"},{"affiliations":["University of Bergen, Bergen, Norway","Haukeland University Hospital, University of Bergen, Bergen, Norway"],"email":"laura.garrison@uib.no","is_corresponding":false,"name":"Laura Garrison"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Hana Pokojn\u00e1"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1316","time_end":"","time_stamp":"","time_start":"","title":"The Language of Infographics: Toward Understanding Conceptual Metaphor Use in Scientific Storytelling","uid":"v-full-1316","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"In this study, we address the growing issue of misleading charts, a prevalent problem that undermines the integrity of information dissemination. Misleading charts can distort the viewer's perception of data, leading to misinterpretations and decisions based on false information. The development of effective automatic detection methods for misleading charts is an urgent field of research. The advancement of multimodal Large Language Models (LLMs) has introduced a promising direction for addressing this challenge. We explored the capabilities of these models in analyzing complex charts and assessing the impact of different prompting strategies on the models' analyses. We utilized a dataset of misleading charts collected from the internet by prior research and crafted nine distinct prompts, ranging from simple to complex, to test the ability of four different multimodal LLMs in detecting over 21 different chart issues. Through three experiments--from initial exploration to detailed analysis--we progressively gained insights into how to effectively prompt LLMs to identify misleading charts and developed strategies to address the scalability challenges encountered as we expanded our detection range from the initial five issues to 21 issues in the final experiment. Our findings reveal that multimodal LLMs possess a strong capability for chart comprehension and critical thinking in data interpretation. There is significant potential in employing multimodal LLMs to counter misleading information by supporting critical thinking and enhancing visualization literacy. 
This study demonstrates their applicability in addressing the pressing concern of misleading charts.","accessible_pdf":false,"authors":[{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"yhload@cse.ust.hk","is_corresponding":true,"name":"Leo Yu-Ho Lo"},{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"huamin@cse.ust.hk","is_corresponding":false,"name":"Huamin Qu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Leo Yu-Ho Lo"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1318","time_end":"","time_stamp":"","time_start":"","title":"How Good (Or Bad) Are LLMs in Detecting Misleading Visualizations","uid":"v-full-1318","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Dynamic data visualizations can convey large amounts of information over time, such as using motion to depict changes in data values for multiple entities. Such dynamic displays put a demand on our visual processing capacities, yet our perception of motion is limited. When tracking multiple objects across space and time, humans can typically track up to four objects, and the capacity is even lower if we also need to remember the history of the objects\u2019 features. Several techniques have been shown to improve the processing of dynamic displays. Staging the animation to sequentially show steps in a transition and tracing object movement by displaying trajectory histories can increase processing by reducing the cognitive load. In this paper, we examine the effectiveness of staging and tracing in dynamic displays. We showed participants animated line charts depicting the movements of lines and asked them to identify the line with the highest mean and variance. We manipulated the animation to display the lines with or without staging, tracing and history, and compared the results to a static chart as a control. Results showed that tracing and staging are preferred by participants, and improve their performance in mean and variance tasks, respectively. The preferred display time was 3 times shorter when staging was used. Also, encoding animation speed with mean and variance in congruent tasks is associated with higher accuracy.
These findings help inform real-world best practices for building dynamic displays that leverage the strength of humans' visual processing.","accessible_pdf":false,"authors":[{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"shu343@gatech.edu","is_corresponding":true,"name":"Songwen Hu"},{"affiliations":["Northwestern University, Evanston, United States"],"email":"ouxunjiang@u.northwestern.edu","is_corresponding":false,"name":"Ouxun Jiang"},{"affiliations":["Dolby Laboratories Inc., San Francisco, United States"],"email":"jcr@dolby.com","is_corresponding":false,"name":"Jeffrey Riedmiller"},{"affiliations":["Georgia Tech, Atlanta, United States","University of Massachusetts Amherst, Amherst, United States"],"email":"cxiong@gatech.edu","is_corresponding":false,"name":"Cindy Xiong Bearfield"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Songwen Hu"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1325","time_end":"","time_stamp":"","time_start":"","title":"Motion-Based Visual Encoding Can Improve Performance on Perceptual Tasks with Dynamic Time Series","uid":"v-full-1325","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Evaluating the quality of text responses generated by large language models (LLMs) poses unique challenges compared to traditional machine learning. While automatic side-by-side evaluation has emerged as a promising approach, LLM developers face scalability and interpretability challenges in analyzing these evaluation results. In this paper, we present LLM Comparator, a novel visual analytics tool for interactively analyzing results from side-by-side evaluation of LLMs. The tool provides users with interactive workflows to understand when and why a model performs better or worse than a baseline model, and how the responses from two models differ qualitatively. We iteratively designed and developed the tool by closely working with researchers and engineers at a large technology company. Qualitative feedback from users highlights that the tool facilitates in-depth analysis of individual examples while enabling users to visually overview and flexibly slice data. 
This empowers users to identify undesirable patterns, formulate hypotheses about model behavior, and gain insights for model improvement.","accessible_pdf":false,"authors":[{"affiliations":["Google, Atlanta, United States"],"email":"minsuk.kahng@gmail.com","is_corresponding":true,"name":"Minsuk Kahng"},{"affiliations":["Google Research, Seattle, United States"],"email":"iftenney@google.com","is_corresponding":false,"name":"Ian Tenney"},{"affiliations":["Google Research, Cambridge, United States"],"email":"mahimap@google.com","is_corresponding":false,"name":"Mahima Pushkarna"},{"affiliations":["Google Research, Pittsburgh, United States"],"email":"lxieyang.cmu@gmail.com","is_corresponding":false,"name":"Michael Xieyang Liu"},{"affiliations":["Google Research, Cambridge, United States"],"email":"jwexler@google.com","is_corresponding":false,"name":"James Wexler"},{"affiliations":["Google, Cambridge, United States"],"email":"ereif@google.com","is_corresponding":false,"name":"Emily Reif"},{"affiliations":["Google Research, Mountain View, United States"],"email":"kallarackal@google.com","is_corresponding":false,"name":"Krystal Kallarackal"},{"affiliations":["Google Research, Seattle, United States"],"email":"minsuk.cs@gmail.com","is_corresponding":false,"name":"Minsuk Chang"},{"affiliations":["Google, Cambridge, United States"],"email":"michaelterry@google.com","is_corresponding":false,"name":"Michael Terry"},{"affiliations":["Google, Paris, France"],"email":"ldixon@google.com","is_corresponding":false,"name":"Lucas Dixon"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Minsuk Kahng"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1326","time_end":"","time_stamp":"","time_start":"","title":"LLM Comparator: Interactive Analysis of Side-by-Side Evaluation of Large Language Models","uid":"v-full-1326","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"The integration of Large Language Models (LLMs), especially ChatGPT, into education is poised to revolutionize students' learning experiences by introducing innovative conversational learning methodologies. To empower students to fully leverage the capabilities of ChatGPT in educational scenarios, understanding students' interaction patterns with ChatGPT is crucial for instructors. However, this endeavor is challenging due to the absence of datasets focused on student-ChatGPT conversations and the complexities in identifying and analyzing the evolving interaction patterns within conversations. To address these challenges, we collected conversational data from 48 students interacting with ChatGPT in a master's level data visualization course over one semester. We then developed a coding scheme, grounded in the literature on cognitive levels and thematic analysis, to categorize students' interaction patterns with ChatGPT. Furthermore, we present a visual analytics system, StuGPTViz, that tracks and compares temporal patterns in student prompts and the quality of ChatGPT's responses at multiple scales, revealing significant pedagogical insights for instructors. We validated the system's effectiveness through expert interviews with six data visualization instructors and three case studies.
The results confirmed StuGPTViz's capacity to enhance educators' insights into the pedagogical value of ChatGPT. We also discussed the potential research opportunities of applying visual analytics in education and developing AI-driven personalized learning solutions.","accessible_pdf":false,"authors":[{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"zchendf@connect.ust.hk","is_corresponding":true,"name":"Zixin Chen"},{"affiliations":["The Hong Kong University of Science and Technology, Sai Kung, China"],"email":"csejiachenw@ust.hk","is_corresponding":false,"name":"Jiachen Wang"},{"affiliations":["Texas A&M University, College Station, United States"],"email":"xiameng9355@gmail.com","is_corresponding":false,"name":"Meng Xia"},{"affiliations":["The Hong Kong University of Science and Technology, Kowloon, Hong Kong"],"email":"kshigyo@connect.ust.hk","is_corresponding":false,"name":"Kento Shigyo"},{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"dliuak@connect.ust.hk","is_corresponding":false,"name":"Dingdong Liu"},{"affiliations":["Hong Kong University of Science and Technology, Hong Kong, Hong Kong"],"email":"rzhangab@connect.ust.hk","is_corresponding":false,"name":"Rong Zhang"},{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"huamin@cse.ust.hk","is_corresponding":false,"name":"Huamin Qu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Zixin Chen"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1329","time_end":"","time_stamp":"","time_start":"","title":"StuGPTViz: A Visual Analytics Approach to Understand Student-ChatGPT Interactions","uid":"v-full-1329","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Translating natural language to visualization (NL2VIS) has shown great promise for visual data analysis, but it remains a challenging task that requires multiple low-level implementations, such as natural language processing and visualization design. Recent advancements in pre-trained large language models (LLMs) are opening new avenues for generating visualizations from natural language. However, the lack of a comprehensive and reliable benchmark hinders our understanding of LLMs\u2019 capabilities in visualization generation. In this paper, we address this gap by proposing a new NL2VIS benchmark called VisEval. Firstly, we introduce a high-quality and large-scale dataset. This dataset includes 2,524 representative queries covering 146 databases, paired with accurately labeled ground truths. Secondly, we advocate for a comprehensive automated evaluation methodology covering multiple dimensions, including validity, legality, and readability. By systematically scanning for potential issues with a number of heterogeneous checkers, VisEval provides reliable and trustworthy evaluation outcomes. We run VisEval on a series of state-of-the-art LLMs.
Our evaluation reveals prevalent challenges and delivers essential insights for future advancements.","accessible_pdf":false,"authors":[{"affiliations":["Microsoft Research, Shanghai, China"],"email":"christy05.chen@gmail.com","is_corresponding":true,"name":"Nan Chen"},{"affiliations":["Microsoft Research, Shanghai, China"],"email":"scottyugochang@gmail.com","is_corresponding":false,"name":"Yuge Zhang"},{"affiliations":["Microsoft Research, Shanghai, China"],"email":"jiahangxu@microsoft.com","is_corresponding":false,"name":"Jiahang Xu"},{"affiliations":["ShanghaiTech University, Shanghai, China"],"email":"rk.ren@outlook.com","is_corresponding":false,"name":"Kan Ren"},{"affiliations":["Microsoft Research, Shanghai, China"],"email":"yuqyang@microsoft.com","is_corresponding":false,"name":"Yuqing Yang"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Nan Chen"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1332","time_end":"","time_stamp":"","time_start":"","title":"VisEval: A Benchmark for Data Visualization in the Era of Large Language Models","uid":"v-full-1332","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Data videos are increasingly becoming a popular data storytelling form that integrates visuals and audio. In recent years, more and more researchers have explored many narrative structures for effective and attractive data storytelling. Meanwhile, the Hero's Journey provides a classic narrative framework specific to the Hero's story that has been adopted across various media. There are continuous discussions about applying the Hero's Journey to data stories. However, there is so far little systematic and practical guidance on how to create a data video for a specific story type like the Hero's Journey, or on how to manipulate its sound and visual designs simultaneously. To fill this gap, we first identified 48 data videos aligned with the Hero's Journey as a common storytelling structure from 109 high-quality data videos. Then, we examined how existing practices apply the Hero's Journey to creating data videos. We coded the 48 data videos in terms of the narrative stages, sound design, and visual design according to the Hero's Journey structure. Based on our findings, we proposed a design space to provide practical guidance on the narrative, visual, and sound design for the different narrative segments of the Hero's Journey (i.e., Departure, Initiation, Return) in data video creation. To validate our proposed design space, we conducted a user study where 20 participants were invited to design data videos with and without our design space guidance; the resulting videos were evaluated by two experts.
Results show that our design space provides useful and practical guidance that helps data storytellers effectively create data videos with the Hero's Journey.","accessible_pdf":false,"authors":[{"affiliations":["The Hong Kong University of Science and Technology, Guangzhou, China"],"email":"zwei302@connect.hkust-gz.edu.cn","is_corresponding":true,"name":"Zheng Wei"},{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"huamin@cse.ust.hk","is_corresponding":false,"name":"Huamin Qu"},{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"xxubq@connect.ust.hk","is_corresponding":false,"name":"Xian Xu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Zheng Wei"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1333","time_end":"","time_stamp":"","time_start":"","title":"Telling Data Stories with the Hero\u2019s Journey: Design Guidance for Creating Data Videos","uid":"v-full-1333","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Genomics experts rely on visualization to extract and share insights from complex and large-scale datasets. Beyond off-the-shelf tools for data exploration, there is an increasing need for platforms that aid experts in authoring customized visualizations for both exploration and communication of insights. A variety of interactive techniques have been proposed for authoring data visualizations, such as template editing, shelf configuration, natural language input, and code editors. However, it remains unclear how genomics experts create visualizations and which techniques best support their visualization tasks and needs. To address this gap, we conducted two user studies with genomics researchers: (1) semi-structured interviews (n=20) to identify the tasks, user contexts, and current visualization authoring techniques and (2) an exploratory study (n=13) using visual probes to elicit users\u2019 intents and desired techniques when creating visualizations. Our contributions include (1) a characterization of how visualization authoring is currently utilized in genomics visualization, identifying limitations and benefits in light of common criteria for authoring tools, and (2) generalizable and actionable design implications for genomics visualization authoring tools based on our findings on task- and user-specific usefulness of authoring techniques.","accessible_pdf":false,"authors":[{"affiliations":["Eindhoven University of Technology, Eindhoven, Netherlands"],"email":"a.v.d.brandt@tue.nl","is_corresponding":true,"name":"Astrid van den Brandt"},{"affiliations":["Harvard Medical School, Boston, United States"],"email":"sehi_lyi@hms.harvard.edu","is_corresponding":false,"name":"Sehi L'Yi"},{"affiliations":["Harvard Medical School, Boston, United States"],"email":"huyen_nguyen@hms.harvard.edu","is_corresponding":false,"name":"Huyen N. 
Nguyen"},{"affiliations":["Eindhoven University of Technology, Eindhoven, Netherlands"],"email":"a.vilanova@tue.nl","is_corresponding":false,"name":"Anna Vilanova"},{"affiliations":["Harvard Medical School, Boston, United States"],"email":"nils@hms.harvard.edu","is_corresponding":false,"name":"Nils Gehlenborg"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Astrid van den Brandt"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1342","time_end":"","time_stamp":"","time_start":"","title":"Understanding Visualization Authoring Techniques for Genomics Data in the Context of Personas and Tasks","uid":"v-full-1342","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"As basketball\u2019s popularity surges, fans often find themselves confused and overwhelmed by the rapid game pace and complexity. Basketball tactics, involving a complex series of actions, require substantial knowledge to be fully understood. This complexity leads to a need for additional information and explanation, which can distract fans from the game. To tackle these challenges, we present Sportify, a Visual Question Answering system that integrates narratives and embedded visualization for demystifying basketball tactical questions, aiding fans in understanding various game aspects. We propose three novel action visualizations (i.e., Pass, Cut, and Screen) to demonstrate critical action sequences. To explain the reasoning and logic behind players\u2019 actions, we leverage a large-language model (LLM) to generate narratives. We adopt a storytelling approach for complex scenarios from both first- and third-person perspectives, integrating action visualizations. We evaluated Sportify with basketball fans to investigate its impact on the understanding of tactics, and how different personal perspectives of narratives impact the understanding of complex tactics with action visualizations. Our evaluation with basketball fans demonstrates Sportify\u2019s capability to deepen tactical insights and amplify the viewing experience.
Furthermore, third-person narration assists people in getting in-depth game explanations, while first-person narration enhances fans\u2019 game engagement.","accessible_pdf":false,"authors":[{"affiliations":["Harvard University, Allston, United States"],"email":"chungyi347@gmail.com","is_corresponding":true,"name":"Chunggi Lee"},{"affiliations":["Harvard University, Cambridge, United States"],"email":"mlin@g.harvard.edu","is_corresponding":false,"name":"Tica Lin"},{"affiliations":["University of Minnesota-Twin Cities, Minneapolis, United States"],"email":"ztchen@umn.edu","is_corresponding":false,"name":"Chen Zhu-Tian"},{"affiliations":["Harvard University, Cambridge, United States"],"email":"pfister@seas.harvard.edu","is_corresponding":false,"name":"Hanspeter Pfister"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Chunggi Lee"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1351","time_end":"","time_stamp":"","time_start":"","title":"Sportify: Question Answering with Embedded Visualizations and Personified Narratives for Sports Video","uid":"v-full-1351","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Data visualization aids in making data analysis more intuitive and in-depth, with widespread applications in fields such as biology, finance, and medicine. Massive and continuously growing streaming time series data are typically visualized as line charts, but transmitting these data puts significant pressure on the network, leading to visualization lag or even complete failure to render. This paper proposes FPCS, a universal sampling algorithm that retains feature points from continuously received streaming time series data and compensates for frequently fluctuating feature points, aiming to achieve efficient visualization. This algorithm bridges the gap in sampling for streaming time series data.
The algorithm has several advantages: (1) It optimizes the sampling results by compensating for fewer feature points, retaining the visualization features of the original data very well, ensuring high-quality sampled data; (2) Its execution time is the shortest among similar existing algorithms; (3) It has an almost negligible space overhead; (4) The data sampling process does not depend on the overall data; (5) This algorithm can be applied to infinite streaming data and finite static data.","accessible_pdf":false,"authors":[{"affiliations":["China Nanhu Academy of Electronics and Information Technology(CNAEIT), JiaXing, China"],"email":"3271961659@qq.com","is_corresponding":true,"name":"Hongyan Li"},{"affiliations":["China Nanhu Academy of Electronics and Information Technology(CNAEIT), JiaXing, China"],"email":"ustcboy@outlook.com","is_corresponding":false,"name":"Bo Yang"},{"affiliations":["China Nanhu Academy of Electronics and Information Technology, Jiaxing, China"],"email":"caiyansong@cnaeit.com","is_corresponding":false,"name":"Yansong Chua"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Hongyan Li"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1363","time_end":"","time_stamp":"","time_start":"","title":"FPCS: Feature Preserving Compensated Sampling of Streaming Time Series Data","uid":"v-full-1363","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Synthetic Lethal (SL) relationships, although rare among the vast array of gene combinations, hold substantial promise for targeted cancer therapy. Despite advancements in AI model accuracy, there remains a persistent need among domain experts for interpretive paths and mechanism explorations that better harmonize with domain-specific knowledge, particularly due to the significant costs involved in experimentation. To address this gap, we propose an iterative Human-AI collaborative framework comprising two key components: 1) Human-Engaged Knowledge Graph Refinement based on Metapath Strategies, which leverages insights from interpretive paths and domain expertise to refine the knowledge graph through metapath strategies with appropriate granularity. 2) Cross-Granularity SL Interpretation Enhancement and Mechanism Analysis, which aids domain experts in organizing and comparing prediction results and interpretive paths across different granularities, thereby uncovering new SL relationships, enhancing result interpretation, and elucidating potential mechanisms inferred by Graph Neural Network (GNN) models. These components cyclically optimize model predictions and mechanism explorations, thereby enhancing expert involvement and intervention to build trust. This framework, facilitated by SLInterpreter, ensures that newly generated interpretive paths increasingly align with domain knowledge and adhere more closely to real-world biological principles through iterative Human-AI collaboration.
Subsequently, we evaluate the efficacy of the framework through a case study and expert interviews.","accessible_pdf":false,"authors":[{"affiliations":["Shanghaitech University, Shanghai, China"],"email":"jianghr2023@shanghaitech.edu.cn","is_corresponding":true,"name":"Haoran Jiang"},{"affiliations":["ShanghaiTech University, Shanghai, China"],"email":"shishh2023@shanghaitech.edu.cn","is_corresponding":false,"name":"Shaohan Shi"},{"affiliations":["ShanghaiTech University, Shanghai, China"],"email":"zhangshh2@shanghaitech.edu.cn","is_corresponding":false,"name":"Shuhao Zhang"},{"affiliations":["ShanghaiTech University, Shanghai, China"],"email":"zhengjie@shanghaitech.edu.cn","is_corresponding":false,"name":"Jie Zheng"},{"affiliations":["ShanghaiTech University, Shanghai, China"],"email":"liquan@shanghaitech.edu.cn","is_corresponding":false,"name":"Quan Li"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Haoran Jiang"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1368","time_end":"","time_stamp":"","time_start":"","title":"SLInterpreter: An Exploratory and Iterative Human-AI Collaborative System for GNN-based Synthetic Lethal Prediction","uid":"v-full-1368","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"In volume visualization, visualization synthesis has attracted much attention due to its ability to generate novel visualizations without following the conventional rendering pipeline. However, existing solutions based on generative adversarial networks often require many training images and take significant training time. Still, issues such as low quality, poor consistency, and limited flexibility persist. This paper introduces StyleRF-VolVis, an innovative style transfer framework for expressive volume visualization (VolVis) via neural radiance field (NeRF). The expressiveness of StyleRF-VolVis is upheld by its ability to accurately separate the underlying scene geometry (i.e., content) and color appearance (i.e., style), conveniently modify color, opacity, and lighting of the original rendering while maintaining visual content consistency across the views, and effectively transfer arbitrary styles from reference images to the reconstructed 3D scene. To achieve these, we design a base NeRF model for scene geometry extraction, a palette color network to classify regions of the radiance field for photorealistic editing, and an unrestricted color network to lift the color palette constraint via knowledge distillation for non-photorealistic editing.
We demonstrate the superior quality, consistency, and flexibility of StyleRF-VolVis by experimenting with various volume rendering scenes and reference images and comparing StyleRF-VolVis against other image-based (AdaIN), video-based (ReReVST), and NeRF-based (ARF and SNeRF) style rendering solutions.","accessible_pdf":false,"authors":[{"affiliations":["University of Notre Dame, Notre Dame, United States"],"email":"ktang2@nd.edu","is_corresponding":true,"name":"Kaiyuan Tang"},{"affiliations":["University of Notre Dame, Notre Dame, United States"],"email":"chaoli.wang@nd.edu","is_corresponding":false,"name":"Chaoli Wang"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Kaiyuan Tang"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1391","time_end":"","time_stamp":"","time_start":"","title":"StyleRF-VolVis: Style Transfer of Neural Radiance Fields for Expressive Volume Visualization","uid":"v-full-1391","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"This paper discusses challenges and design strategies in responsive design for thematic maps in information visualization. Thematic maps pose a number of unique challenges for responsiveness, such as inflexible aspect ratios that do not easily adapt to varying screen dimensions, or densely clustered visual elements in urban areas becoming illegible at smaller scales. However, design guidance on how to best address these issues is currently lacking. We conducted design sessions with eight professional designers and developers of web-based thematic maps for information visualization. Participants were asked to redesign a given map for various screen sizes and aspect ratios and to describe their reasoning for when and how they adapted the design. We report general observations of practitioners\u2019 motivations, decision-making processes, and personal design frameworks. We then derive seven challenges commonly encountered in responsive map design, and 17 strategies to address them, such as repositioning elements, segmenting the map, or using alternative visualizations. We compile these challenges and strategies into an illustrated cheat sheet targeted at anyone designing or learning to design responsive maps. 
The cheat sheet is available online: https://responsive-vis.github.io/map-cheat-sheet.","accessible_pdf":false,"authors":[{"affiliations":["University of Edinburgh, Edinburgh, United Kingdom"],"email":"sarah.schoettler@ed.ac.uk","is_corresponding":true,"name":"Sarah Sch\u00f6ttler"},{"affiliations":["University of Edinburgh, Edinburgh, United Kingdom"],"email":"uhinrich@ed.ac.uk","is_corresponding":false,"name":"Uta Hinrichs"},{"affiliations":["Inria, Bordeaux, France","University of Edinburgh, Edinburgh, United Kingdom"],"email":"bbach@inf.ed.ac.uk","is_corresponding":false,"name":"Benjamin Bach"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Sarah Sch\u00f6ttler"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1393","time_end":"","time_stamp":"","time_start":"","title":"Practices and Strategies in Responsive Thematic Map Design: A Report from Design Workshops with Experts","uid":"v-full-1393","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"This paper presents discursive patinas, a technique to visualize discussions onto data visualizations, inspired by how people leave traces in the physical world. While data visualizations are widely discussed in online communities and social media, comments tend to be displayed separately from the visualization. We lack ways to relate these discussions to the content of the visualization, e.g., to situate comments, explain visual patterns, or question assumptions. In our visualization annotation interface, users can designate areas within the visualization to, e.g., highlight specific visual marks (anchors), attach textual comments, and add category labels, likes, and replies. By coloring and styling these designated areas, a meta visualization emerges, showing what and where people comment and annotate. These patinas show regions of heavy discussions, recent commenting activity, and the distribution of questions, suggestions, or personal stories. To study how people use anchors to discuss visualizations and understand if and how information in patinas influences people's understanding of the discussion, we ran workshops with 90 participants including students, domain experts, and visualization researchers. Our results show that discursive patinas improve the ability to navigate discussions and guide people to comments that help understand, contextualize, or scrutinize the visualization.
We discuss the potential of the technique to support discursive engagements, including critical readings of visualizations, design feedback, and feminist approaches to data visualization.","accessible_pdf":false,"authors":[{"affiliations":["University of Edinburgh, Edinburgh, United Kingdom","Potsdam University of Applied Sciences, Potsdam, Germany"],"email":"tobias.kauer@fh-potsdam.de","is_corresponding":true,"name":"Tobias Kauer"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"derya.akbaba@liu.se","is_corresponding":false,"name":"Derya Akbaba"},{"affiliations":["University of Applied Sciences Potsdam, Potsdam, Germany"],"email":"doerk@fh-potsdam.de","is_corresponding":false,"name":"Marian D\u00f6rk"},{"affiliations":["Inria, Bordeaux, France","University of Edinburgh, Edinburgh, United Kingdom"],"email":"bbach@inf.ed.ac.uk","is_corresponding":false,"name":"Benjamin Bach"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Tobias Kauer"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1394","time_end":"","time_stamp":"","time_start":"","title":"Discursive Patinas: Anchoring Discussions in Data Visualizations","uid":"v-full-1394","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Onboarding a user to a visualization dashboard entails explaining its various components, including the chart types used, the data loaded, and the interactions provided. Authoring such an onboarding experience is time-consuming and requires significant knowledge, and little guidance exists on how best to do this. End-users being onboarded to a new dashboard can be either confused and overwhelmed, or disinterested and disengaged, depending on their expertise. We propose interactive dashboard tours (d-tours) as semi-automated onboarding experiences for variable user expertise that preserve the user\u2019s agency, interest, and engagement. Our interactive tours concept draws from open-world game design to give the user freedom in choosing their path in the onboarding. We have implemented the concept in a tool called D-TOUR PROTOTYPE that allows authors to craft custom and interactive dashboard tours from scratch or using automatic templates. Automatically generated tours can still be customized to use different media (such as video, audio, or highlighting) or new narratives to produce a tailored onboarding experience for individual users or groups. We demonstrate the usefulness of interactive dashboard tours through use cases and expert interviews. The evaluation shows that the authors find the automation in the D-Tour prototype helpful and time-saving, and the users find it engaging and intuitive.
This paper and all supplemental materials are available at \url{https://osf.io/6fbjp/}.","accessible_pdf":false,"authors":[{"affiliations":["Pro2Future GmbH, Linz, Austria","Johannes Kepler University, Linz, Austria"],"email":"vaishali.dhanoa@pro2future.at","is_corresponding":true,"name":"Vaishali Dhanoa"},{"affiliations":["Johannes Kepler University, Linz, Austria"],"email":"andreas.hinterreiter@jku.at","is_corresponding":false,"name":"Andreas Hinterreiter"},{"affiliations":["Johannes Kepler University, Linz, Austria"],"email":"vanessa.fediuk@jku.at","is_corresponding":false,"name":"Vanessa Fediuk"},{"affiliations":["Aarhus University, Aarhus, Denmark"],"email":"elm@cs.au.dk","is_corresponding":false,"name":"Niklas Elmqvist"},{"affiliations":["Institute of Visual Computing & Human-Centered Technology, Vienna, Austria"],"email":"groeller@cg.tuwien.ac.at","is_corresponding":false,"name":"Eduard Gr\u00f6ller"},{"affiliations":["Johannes Kepler University Linz, Linz, Austria"],"email":"marc.streit@jku.at","is_corresponding":false,"name":"Marc Streit"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Vaishali Dhanoa"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1395","time_end":"","time_stamp":"","time_start":"","title":"D-Tour: Semi-Automatic Generation of Interactive Guided Tours for Visualization Dashboard Onboarding","uid":"v-full-1395","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Visualization designers often rely on examples to explore the space of possible designs, yet we have little insight into how examples shape data visualization design outcomes. While the effects of examples have been studied in other disciplines, such as web design or engineering, the results are not readily applicable to visualization design due to inconsistencies in findings and challenges unique to visualization design. Towards bridging this gap, we conduct an exploratory experiment involving 32 data visualization designers focusing on the influence of five factors (timing, quantity, diversity, data topic similarity, and data schema similarity) on objectively measurable design outcomes (e.g., numbers of designs and idea transfers). Our quantitative analysis shows that when examples are introduced after initial brainstorming, designers curate examples with topics less similar to the dataset they are working on and produce more designs with a high variation in visualization components. Also, designers copy more ideas from examples with higher data schema similarities. Our qualitative analysis of participants\u2019 thought processes provides insights into why designers incorporate examples into their designs, revealing potential factors that have not been previously investigated. Finally, we discuss how our results inform future work on quantifying designs, improving measures of effectiveness, and supporting example-based visualization design. All supplementary materials are available at https://osf.io/sbp2k/?view_only=ca14af497f5845a0b1b2c616699fefc5","accessible_pdf":false,"authors":[{"affiliations":["University of Maryland, College Park, United States"],"email":"hbako@umd.edu","is_corresponding":true,"name":"Hannah K. 
Bako"},{"affiliations":["The University of Texas at Austin, Austin, United States"],"email":"xinyi.liu@utexas.edu","is_corresponding":false,"name":"Xinyi Liu"},{"affiliations":["University of Maryland, College Park, United States"],"email":"gko1@terpmail.umd.edu","is_corresponding":false,"name":"Grace Ko"},{"affiliations":["Human Data Interaction Lab, College Park, United States"],"email":"hsong02@cs.umd.edu","is_corresponding":false,"name":"Hyemi Song"},{"affiliations":["University of Washington, Seattle, United States"],"email":"leibatt@cs.washington.edu","is_corresponding":false,"name":"Leilani Battle"},{"affiliations":["University of Maryland, College Park, United States"],"email":"leozcliu@umd.edu","is_corresponding":false,"name":"Zhicheng Liu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Hannah K. Bako"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1414","time_end":"","time_stamp":"","time_start":"","title":"Unveiling How Examples Shape Data Visualization Design Outcomes","uid":"v-full-1414","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Various data visualization downstream applications such as reverse engineering and interactive authoring require a vocabulary that describes the structure of visualization scenes and the procedure to manipulate them. A few scene abstractions have been proposed, but they are restricted to specific applications for a limited set of visualization types. A unified and expressive model of data visualization scenes for different downstream applications has been missing. To fill this gap, we present Manipulable Semantic Components (MSC), a computational representation of data visualization scenes, to support applications in scene understanding and augmentation. MSC consists of two parts: a unified object model describing the structure of a visualization scene in terms of semantic components, and a set of operations to generate and modify the scene components. 
We demonstrate the benefits of MSC in three applications: visualization authoring, visualization deconstruction and reuse, and animation specification.","accessible_pdf":false,"authors":[{"affiliations":["University of Maryland, College Park, United States"],"email":"leozcliu@umd.edu","is_corresponding":true,"name":"Zhicheng Liu"},{"affiliations":["University of Maryland, College Park, United States"],"email":"cchen24@umd.edu","is_corresponding":false,"name":"Chen Chen"},{"affiliations":["University of Maryland, College Park, United States"],"email":"hookerj100@gmail.com","is_corresponding":false,"name":"John Hooker"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Zhicheng Liu"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1416","time_end":"","time_stamp":"","time_start":"","title":"Manipulable Semantic Components: a Computational Representation of Data Visualization Scenes","uid":"v-full-1416","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Visualization items\u2014factual questions about visualizations that ask viewers to accomplish visualization tasks\u2014are regularly used in the field of information visualization as educational and evaluative materials. For example, researchers of visualization literacy require large, diverse banks of items to conduct studies where the same skill is measured repeatedly on the same participants. Yet, generating a large number of high-quality, diverse items requires significant time and expertise. To address the critical need for a large number of diverse visualization items in education and research, this paper investigates the potential for large language models (LLMs) to automate the generation of multiple-choice visualization items. Through an iterative design process, we develop an LLM-based pipeline, the VILA (Visualization Items Generated by Large LAnguage Models) pipeline, for efficiently generating visualization items that measure people\u2019s ability to accomplish visualization tasks. We use the VILA pipeline to generate 1,404 candidate items across 12 chart types and 13 visualization tasks. In collaboration with 11 visualization experts, we develop an evaluation rulebook which we then use to rate the quality of all candidate items. The result is a final bank, the VILA bank, of \u223c1,100 items. From this evaluation, we also identify and classify current limitations of LLMs in generating visualization items, and discuss the role of human oversight in ensuring quality. In addition, we demonstrate an application of our work by creating a visualization literacy test, VILA-VLAT, which measures people\u2019s ability to complete a diverse set of tasks on various types of visualizations; to show the potential of this application, we assess the convergent validity of VILA-VLAT by comparing it to the existing test VLAT via an online study (R = 0.70). Lastly, we discuss the application areas of the VILA pipeline and the VILA bank and provide practical recommendations for their use. 
All supplemental materials are available at https://osf.io/ysrhq/?view_only=e31b3ddf216e4351bb37bcedf744e9d6.","accessible_pdf":false,"authors":[{"affiliations":["Northwestern University, Evanston, United States"],"email":"yuancui2025@u.northwestern.edu","is_corresponding":true,"name":"Yuan Cui"},{"affiliations":["Northwestern University, Evanston, United States"],"email":"wanqian.ge@northwestern.edu","is_corresponding":false,"name":"Lily W. Ge"},{"affiliations":["Worcester Polytechnic Institute, Worcester, United States"],"email":"yding5@wpi.edu","is_corresponding":false,"name":"Yiren Ding"},{"affiliations":["Worcester Polytechnic Institute, Worcester, United States"],"email":"ltharrison@wpi.edu","is_corresponding":false,"name":"Lane Harrison"},{"affiliations":["Northwestern University, Evanston, United States"],"email":"fumeng.p.yang@gmail.com","is_corresponding":false,"name":"Fumeng Yang"},{"affiliations":["Northwestern University, Chicago, United States"],"email":"matthew.kay@gmail.com","is_corresponding":false,"name":"Matthew Kay"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yuan Cui"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1422","time_end":"","time_stamp":"","time_start":"","title":"Promises and Pitfalls: Using Large Language Models to Generate Visualization Items","uid":"v-full-1422","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Comics have been shown to be an effective method for sequential data-driven storytelling, especially for dynamic graphs that change over time. However, manually creating a data-driven comic for a dynamic graph is currently time-consuming, complex, and error-prone. In this paper, we propose DG Comics, a novel comic authoring tool for dynamic graphs that allows users to semi-automatically build the comic and annotate it. The tool uses a hierarchical clustering algorithm that we newly developed for segmenting consecutive snapshots of the dynamic graph while preserving their chronological order. It also provides rich information on both individuals and communities extracted from dynamic graphs in multiple views, where users can explore dynamic graphs and choose what to tell in comics. For evaluation, we provide an example and report results from a user study and expert review.","accessible_pdf":false,"authors":[{"affiliations":["Ulsan National Institute of Science and Technology, Ulsan, Korea, Republic of"],"email":"joohee@unist.ac.kr","is_corresponding":true,"name":"Joohee Kim"},{"affiliations":["Ulsan National Institute of Science and Technology, Ulsan, Korea, Republic of"],"email":"gusdnr0916@unist.ac.kr","is_corresponding":false,"name":"Hyunwook Lee"},{"affiliations":["Ulsan National Institute of Science and Technology, Ulsan, Korea, Republic of"],"email":"ducnm@unist.ac.kr","is_corresponding":false,"name":"Duc M. 
Nguyen"},{"affiliations":["Australian National University, Canberra, Australia"],"email":"minjeong.shin@anu.edu.au","is_corresponding":false,"name":"Minjeong Shin"},{"affiliations":["IBM Research, Cambridge, United States"],"email":"bumchul.kwon@us.ibm.com","is_corresponding":false,"name":"Bum Chul Kwon"},{"affiliations":["UNIST, Ulsan, Korea, Republic of"],"email":"sako@unist.ac.kr","is_corresponding":false,"name":"Sungahn Ko"},{"affiliations":["Aarhus University, Aarhus, Denmark"],"email":"elm@cs.au.dk","is_corresponding":false,"name":"Niklas Elmqvist"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Joohee Kim"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1425","time_end":"","time_stamp":"","time_start":"","title":"DG Comics: Semi-Automatically Authoring Graph Comics for Dynamic Graphs","uid":"v-full-1425","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Numerical simulation serves as a cornerstone in scientific modeling, yet the process of fine-tuning simulation parameters poses significant challenges. Conventionally, parameter adjustment relies on extensive numerical simulations, data analysis, and expert insights, resulting in substantial computational costs and low efficiency. The emergence of deep learning in recent years has provided promising avenues for more efficient exploration of parameter spaces. However, existing approaches often lack intuitive methods for precise parameter adjustment and optimization. To tackle these challenges, we introduce ParamsDrag, a model that facilitates parameter space exploration through direct interaction with visualizations. Inspired by DragGAN, our ParamsDrag model operates in three steps. First, the generative component of ParamsDrag generates visualizations based on the input simulation parameters. Second, by directly dragging structure-related features in the visualizations, users can intuitively understand the controlling effect of different parameters. Third, with the understanding from the earlier step, users can steer ParamsDrag to produce dynamic visual outcomes. 
Through experiments conducted on real-world simulations and comparisons with state-of-the-art deep learning based approaches, we demonstrate the efficacy of our solution.","accessible_pdf":false,"authors":[{"affiliations":["Computer Network Information Center, Chinese Academy of Sciences, Beijing, China","University of Chinese Academy of Sciences, Beijing, China"],"email":"liguan@sccas.cn","is_corresponding":true,"name":"Guan Li"},{"affiliations":["Beijing Forestry University, Beijing, China"],"email":"leo_edumail@163.com","is_corresponding":false,"name":"Yang Liu"},{"affiliations":["Computer Network Information Center, Chinese Academy of Sciences, Beijing, China"],"email":"sgh@sccas.cn","is_corresponding":false,"name":"Guihua Shan"},{"affiliations":["Chinese Academy of Sciences, Beijing, China"],"email":"chengshiyu@cnic.cn","is_corresponding":false,"name":"Shiyu Cheng"},{"affiliations":["Beijing Forestry University, Beijing, China"],"email":"weiqun.cao@126.com","is_corresponding":false,"name":"Weiqun Cao"},{"affiliations":["Visa Research, Palo Alto, United States"],"email":"junpeng.wang.nk@gmail.com","is_corresponding":false,"name":"Junpeng Wang"},{"affiliations":["National Taiwan Normal University, Taipei City, Taiwan"],"email":"caseywang777@gmail.com","is_corresponding":false,"name":"Ko-Chih Wang"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Guan Li"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1427","time_end":"","time_stamp":"","time_start":"","title":"ParamsDrag: Interactive Parameter Space Exploration via Image-Space Dragging","uid":"v-full-1427","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Differential privacy ensures the security of individual privacy but poses challenges to data exploration processes because the limited privacy budget incapacitates the flexibility of exploration and the noisy feedback of data requests leads to confusing uncertainty. In this study, we take the lead in describing corresponding exploration scenarios, including underlying requirements and available exploration strategies. To facilitate practical applications, we propose a visual analysis approach to the formulation of exploration strategies. Our approach applies a reinforcement learning model to provide diverse suggestions for exploration strategies according to the exploration intent of users. A novel visual design for representing uncertainty in correlation patterns is integrated into our prototype system to support the proposed approach. Finally, we implemented a user study and two case studies. 
The results of these studies verified that our approach can help develop strategies that satisfy the exploration intent of users.","accessible_pdf":false,"authors":[{"affiliations":["Nankai University, Tianjin, China"],"email":"wangxumeng@nankai.edu.cn","is_corresponding":true,"name":"Xumeng Wang"},{"affiliations":["Nankai University, Tianjin, China"],"email":"jiaoshuangcheng@mail.nankai.edu.cn","is_corresponding":false,"name":"Shuangcheng Jiao"},{"affiliations":["Arizona State University, Tempe, United States"],"email":"cbryan16@asu.edu","is_corresponding":false,"name":"Chris Bryan"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Xumeng Wang"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1438","time_end":"","time_stamp":"","time_start":"","title":"Defogger: A Visual Analysis Approach for Data Exploration of Sensitive Data Protected by Differential Privacy","uid":"v-full-1438","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"We are currently witnessing an increase in web-based, data-driven initiatives that explain complex, contemporary issues through data and visualizations: climate change, sustainability, AI, or cultural discoveries. Many of these projects call themselves \"atlases\", a term that historically referred to collections of maps or scientific illustrations. To answer the question of what makes a \"visualization atlas\", we conducted a systematic analysis of 33 visualization atlases and semi-structured interviews with eight visualization atlas creators. Based on our results, we contribute (1) a definition of visualization atlases as an emerging format to present complex topics in a holistic, data-driven, and curated way through visualization, (2) a set of design patterns and design dimensions that led to (3) defining 5 visualization atlas genres, and (4) insights into the atlas creation from interviews. We found that visualization atlases are unique in that they combine exploratory visualization with narrative elements from data-driven storytelling and structured navigation mechanisms. They can act as reference, communication, or discovery tools targeting a wide range of audiences with different levels of domain knowledge. 
We conclude with a discussion of current design practices and emerging questions around the ethics and potential real-world impact of visualization atlases, aimed to inform the design and study of visualization atlases.","accessible_pdf":false,"authors":[{"affiliations":["The University of Edinburgh, Edinburgh, United Kingdom"],"email":"jinrui.w@outlook.com","is_corresponding":true,"name":"Jinrui Wang"},{"affiliations":["Newcastle University, Newcastle Upon Tyne, United Kingdom"],"email":"xinhuan.shu@gmail.com","is_corresponding":false,"name":"Xinhuan Shu"},{"affiliations":["Inria, Bordeaux, France","University of Edinburgh, Edinburgh, United Kingdom"],"email":"bbach@inf.ed.ac.uk","is_corresponding":false,"name":"Benjamin Bach"},{"affiliations":["University of Edinburgh, Edinburgh, United Kingdom"],"email":"uhinrich@ed.ac.uk","is_corresponding":false,"name":"Uta Hinrichs"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Jinrui Wang"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1446","time_end":"","time_stamp":"","time_start":"","title":"Visualization Atlases: Explaining and Exploring Complex Topics through Data, Visualization, and Narration","uid":"v-full-1446","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"We present a systematic review, an empirical study, and a first set of considerations for designing visualizations in motion, derived from a concrete scenario in which these visualizations were used to support a primary task. In practice, when viewers are confronted with embedded visualizations, they often have to focus on a primary task and can only quickly glance at a visualization showing rich, often dynamically updated, information. As such, the visualizations must be designed so as not to distract from the primary task, while at the same time being readable and useful for aiding the primary task. For example, in games, players who are engaged in a battle have to look at their enemies but also read the remaining health of their own game character from the health bar over their character's head. Many trade-offs are possible in the design of embedded visualizations in such dynamic scenarios, which we explore in-depth in this paper with a focus on user experience. We use video games as an example of an application context with a rich existing set of visualizations in motion. We begin our work with a systematic review of in-game visualizations in motion. Next, we conduct an empirical user study to investigate how different embedded visualizations in motion designs impact user experience. We conclude with a set of considerations and trade-offs for designing visualizations in motion more broadly as derived from what we learned about video games. 
All supplemental materials of this paper are available at osf.io/3v8wm/.","accessible_pdf":false,"authors":[{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Orsay, France","Inria, Saclay, France"],"email":"yaolijie0219@gmail.com","is_corresponding":true,"name":"Lijie Yao"},{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Orsay, France","Inria, Saclay, France"],"email":"federicabucchieri@gmail.com","is_corresponding":false,"name":"Federica Bucchieri"},{"affiliations":["Carleton University, Ottawa, Canada"],"email":"dieselfish@gmail.com","is_corresponding":false,"name":"Victoria McArthur"},{"affiliations":["LISN, Universit\u00e9 Paris-Saclay, CNRS, INRIA, Orsay, France"],"email":"anastasia.bezerianos@universite-paris-saclay.fr","is_corresponding":false,"name":"Anastasia Bezerianos"},{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Orsay, France","Inria, Saclay, France"],"email":"petra.isenberg@inria.fr","is_corresponding":false,"name":"Petra Isenberg"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Lijie Yao"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1451","time_end":"","time_stamp":"","time_start":"","title":"User Experience of Visualizations in Motion: A Case Study and Design Considerations","uid":"v-full-1451","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"This paper presents a practical approach for the optimization of topological simplification, a central pre-processing step for the analysis and visualization of scalar data. Given an input scalar field f and a set of \u201csignal\u201d persistence pairs to maintain, our approach produces an output field g that is close to f and which optimizes (i) the cancellation of \u201cnon-signal\u201d pairs, while (ii) preserving the \u201csignal\u201d pairs. In contrast to pre-existing simplification approaches, our method is not restricted to persistence pairs involving extrema and can thus address a larger class of topological features, in particular saddle pairs in three-dimensional scalar data. Our approach leverages recent generic persistence optimization frameworks and extends them with tailored accelerations specific to the problem of topological simplification. Extensive experiments report substantial accelerations over these frameworks, thereby making topological simplification optimization practical for real-life datasets. Our work enables a direct visualization and analysis of the topologically simplified data, e.g., via isosurfaces of simplified topology (fewer components and handles). We apply our approach to the extraction of prominent filament structures in three-dimensional data. Specifically, we show that our pre-simplification of the data leads to practical improvements over standard topological techniques for removing filament loops. We also show how our framework can be used to repair genus defects in surface processing. 
Finally, we provide a C++ implementation for reproducibility purposes.","accessible_pdf":false,"authors":[{"affiliations":["CNRS, Paris, France","SORBONNE UNIVERSITE, Paris, France"],"email":"mohamed.kissi@lip6.fr","is_corresponding":true,"name":"Mohamed KISSI"},{"affiliations":["CNRS, Paris, France","Sorbonne Universit\u00e9, Paris, France"],"email":"mathieu.pont@lip6.fr","is_corresponding":false,"name":"Mathieu Pont"},{"affiliations":["University of Arizona, Tucson, United States"],"email":"josh@cs.arizona.edu","is_corresponding":false,"name":"Joshua A Levine"},{"affiliations":["CNRS, Paris, France","Sorbonne Universit\u00e9, Paris, France"],"email":"julien.tierny@sorbonne-universite.fr","is_corresponding":false,"name":"Julien Tierny"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Mohamed KISSI"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1461","time_end":"","time_stamp":"","time_start":"","title":"A Practical Solver for Scalar Data Topological Simplification","uid":"v-full-1461","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Trained on vast corpora, Large Language Models (LLMs) have the potential to encode visualization design knowledge and best practices. However, if they fail to do so, they might provide unreliable visualization recommendations. What visualization design preferences, then, have LLMs learned? We contribute DracoGPT, an approach for extracting and modeling visualization design preferences from LLMs. To assess varied tasks, we develop two pipelines---DracoGPT-Rank and DracoGPT-Recommend---to model LLMs prompted to either rank or recommend visual encoding specifications. We use Draco as a shared knowledge base in which to represent LLM design preferences and compare them to best practices from empirical research. We demonstrate that DracoGPT models the preferences expressed by LLMs well, enabling analysis in terms of Draco design constraints. Across a suite of backing LLMs, we find that DracoGPT-Rank and DracoGPT-Recommend moderately agree with each other, but both substantively diverge from guidelines drawn from human subjects experiments. Future work can build on our approach to expand Draco's knowledge base to model a richer set of preferences and serve as a reliable and cost-effective stand-in for LLMs.","accessible_pdf":false,"authors":[{"affiliations":["University of Washington, Seattle, United States"],"email":"wwill@cs.washington.edu","is_corresponding":true,"name":"Huichen Will Wang"},{"affiliations":["University of Washington, Seattle, United States"],"email":"mgord@cs.stanford.edu","is_corresponding":false,"name":"Mitchell L. 
Gordon"},{"affiliations":["University of Washington, Seattle, United States"],"email":"leibatt@cs.washington.edu","is_corresponding":false,"name":"Leilani Battle"},{"affiliations":["University of Washington, Seattle, United States"],"email":"jheer@uw.edu","is_corresponding":false,"name":"Jeffrey Heer"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Huichen Will Wang"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1472","time_end":"","time_stamp":"","time_start":"","title":"DracoGPT: Extracting Visualization Design Preferences from Large Language Models","uid":"v-full-1472","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Recent advancements in Large Language Models (LLMs) and Prompt Engineering have made chatbot customization more accessible, significantly reducing barriers to tasks that previously required programming skills. However, prompt evaluation, especially at the dataset scale, remains complex due to the need to assess prompts across thousands of test instances within a dataset. Our study, based on a comprehensive literature review and pilot study, summarized five critical challenges in prompt evaluation. In response, we introduce a feature-oriented workflow for systematic prompt evaluation, focusing on text summarization. Our workflow advocates feature metrics such as complexity, formality, or naturalness, instead of using traditional quality metrics like ROUGE. This design choice enables a more user-friendly evaluation of prompts, as it guides users in sorting through the ambiguity inherent in natural language. To support this workflow, we introduce Awesum, a visual analytics system that facilitates identifying optimal prompt refinements through interactive visualizations, featuring a novel Prompt Comparator design that employs a BubbleSet-inspired design enhanced by dimensionality reduction techniques. We evaluate the effectiveness and general applicability of the system with practitioners from various domains and found that (1) our design helps overcome the learning curve for non-technical people to conduct a systematic evaluation, and (2) our feature-oriented workflow has the potential to generalize to other NLG and image-generation tasks. 
For future work, we advocate moving towards feature-oriented evaluation of LLM prompts and discuss unsolved challenges in terms of human-agent interaction.","accessible_pdf":false,"authors":[{"affiliations":["University of California Davis, Davis, United States"],"email":"ytlee@ucdavis.edu","is_corresponding":true,"name":"Sam Yu-Te Lee"},{"affiliations":["University of California, Davis, Davis, United States"],"email":"abahukhandi@ucdavis.edu","is_corresponding":false,"name":"Aryaman Bahukhandi"},{"affiliations":["University of California at Davis, Davis, United States"],"email":"dyuliu@ucdavis.edu","is_corresponding":false,"name":"Dongyu Liu"},{"affiliations":["University of California at Davis, Davis, United States"],"email":"ma@cs.ucdavis.edu","is_corresponding":false,"name":"Kwan-Liu Ma"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Sam Yu-Te Lee"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1474","time_end":"","time_stamp":"","time_start":"","title":"Towards Dataset-scale and Feature-oriented Evaluation of Text Summarization in Large Language Model Prompts","uid":"v-full-1474","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"We propose the notion of Attention-aware Visualizations (AAVs) that track the user's perception of a visual representation over time and feed this information back to the visualization. This idea is particularly useful for ubiquitous and immersive analytics where knowing which embedded visualizations the user is looking at can be used to make visualizations react appropriately to the user's attention: for example, by highlighting data the user has not yet seen. We can separate the approach into three components: (1) measuring the user's gaze on a visualization and its parts; (2) tracking the user's attention over time; and (3) reactively modifying the visual representation based on the current attention metric. In this paper, we present two separate implementations of AAV: a 2D numeric integration of attention for web-based visualizations that can use an embodied eye-tracker to capture the user's gaze, and a 3D implementation that uses the stencil buffer to track the visibility of each individual mark in a visualization. Both methods provide similar mechanisms for accumulating attention over time and changing the appearance of marks in response. We also present results from a controlled laboratory experiment studying different visual feedback mechanisms for attention.","accessible_pdf":false,"authors":[{"affiliations":["Aarhus University, Aarhus, Denmark"],"email":"arvind@cs.au.dk","is_corresponding":true,"name":"Arvind Srinivasan"},{"affiliations":["Aarhus University, Aarhus N, Denmark"],"email":"johannes@ellemose.eu","is_corresponding":false,"name":"Johannes Ellemose"},{"affiliations":["Bangor University, Bangor, United Kingdom"],"email":"p.butcher@bangor.ac.uk","is_corresponding":false,"name":"Peter W. S. Butcher"},{"affiliations":["Bangor University, Bangor, United Kingdom"],"email":"p.ritsos@bangor.ac.uk","is_corresponding":false,"name":"Panagiotis D. 
Ritsos"},{"affiliations":["Aarhus University, Aarhus, Denmark"],"email":"elm@cs.au.dk","is_corresponding":false,"name":"Niklas Elmqvist"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Arvind Srinivasan"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1480","time_end":"","time_stamp":"","time_start":"","title":"Attention-Aware Visualization: Tracking and Responding to User Perception Over Time","uid":"v-full-1480","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Egocentric networks, often visualized as node-link diagrams, portray the complex relationship (link) dynamics between an entity (node) and others. However, common analytics tasks are multifaceted, encompassing interactions among four key aspects: strength, function, structure, and content. Current node-link visualization designs may fall short, focusing narrowly on certain aspects and neglecting the holistic, dynamic nature of egocentric networks. To bridge this gap, we introduce SpreadLine, a novel visualization framework designed to enable the visual exploration of egocentric networks from these four aspects at the microscopic level. Leveraging the intuitive appeal of storyline visualizations, SpreadLine adopts a storyline-based design to represent entities and their evolving relationships. We further encode essential topological information in the layout and condense the contextual information in a metro map metaphor, allowing for a more engaging and effective way to explore temporal and attribute-based information. To guide our work, with a thorough review of pertinent literature, we have distilled a task taxonomy that addresses the analytical needs specific to egocentric network exploration. Acknowledging the diverse analytical requirements of users, SpreadLine offers customizable encodings to enable users to tailor the framework for their tasks. We demonstrate the efficacy and general applicability of SpreadLine through three diverse real-world case studies and a usability study.","accessible_pdf":false,"authors":[{"affiliations":["University of California, Davis, Davis, United States"],"email":"yskuo@ucdavis.edu","is_corresponding":true,"name":"Yun-Hsin Kuo"},{"affiliations":["University of California at Davis, Davis, United States"],"email":"dyuliu@ucdavis.edu","is_corresponding":false,"name":"Dongyu Liu"},{"affiliations":["University of California at Davis, Davis, United States"],"email":"ma@cs.ucdavis.edu","is_corresponding":false,"name":"Kwan-Liu Ma"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yun-Hsin Kuo"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1483","time_end":"","time_stamp":"","time_start":"","title":"SpreadLine: Visualizing Egocentric Dynamic Influence","uid":"v-full-1483","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Referential gestures, or as termed in linguistics, {\\em deixis}, are an essential part of communication around data visualizations. 
Despite their importance, such gestures are often overlooked when documenting data analysis meetings. Transcripts, for instance, fail to capture gestures, and video recordings may not adequately capture or emphasize them. We introduce a novel method for documenting collaborative data meetings that treats deixis as a first-class citizen. Our proposed framework captures cursor-based gestural data along with audio and converts them into interactive documents. The framework leverages a large language model to identify word correspondences with gestures. These identified references are used to create context-based annotations in the resulting interactive document. We assess the effectiveness of our proposed method through a user study, finding that participants preferred our automated interactive documentation over recordings, transcripts, and manual note-taking. Furthermore, we derive a preliminary taxonomy of cursor-based deictic gestures from participant actions during the study. This taxonomy offers further opportunities for better utilizing cursor-based deixis in collaborative data analysis scenarios.","accessible_pdf":false,"authors":[{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"hatch.on27@gmail.com","is_corresponding":true,"name":"Chang Han"},{"affiliations":["The University of Utah, Salt Lake City, United States"],"email":"kisaacs@sci.utah.edu","is_corresponding":false,"name":"Katherine E. Isaacs"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Chang Han"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1487","time_end":"","time_stamp":"","time_start":"","title":"A Deixis-Centered Approach for Documenting Remote Synchronous Communication around Data Visualizations","uid":"v-full-1487","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"A year ago, we submitted an IEEE VIS paper entitled \u201cSwaying the Public? Impacts of Election Forecast Visualizations on Emotion, Trust, and Intention in the 2022 U.S. Midterms\u201d [68], which was later bestowed with the honor of a best paper award. Yet, studying such a complex phenomenon required us to explore many more design paths than we could count, and certainly more than we could document in a single paper. This paper, then, is the unwritten prequel\u2014the backstory. It chronicles our journey from a simple idea\u2014to study visualizations for election forecasts\u2014through obstacles such as developing meaningfully different, easy-to-understand forecast visualizations, crafting professional-looking forecasts, and grappling with how to study perceptions of the forecasts before, during, and after the 2022 U.S. midterm elections. Our backstory began with developing a design space for two-party election forecasts, defining dimensions such as data transformations, visual channels, layouts, and types of animated narratives. We then qualitatively evaluated ten representative prototypes in this design space through interviews with 13 participants. The interviews yielded invaluable insights into how people interpret uncertainty visualizations and reason about probability in a U.S. 
election context, such as confounding win probability with vote share and erroneously forming connections between concrete visual representations (like dots) and real-world entities (like votes). Informed by these insights, we revised our prototypes to address ambiguity in interpreting visual encodings, particularly through the inclusion of extensive annotations. As we navigated these design paths, we contributed a design space and insights that may help others when designing uncertainty visualizations. We also hope that our design lessons and research process can inspire the research community when exploring topics related to designing visualizations for the general public.","accessible_pdf":false,"authors":[{"affiliations":["Northwestern University, Evanston, United States"],"email":"fumeng.p.yang@gmail.com","is_corresponding":true,"name":"Fumeng Yang"},{"affiliations":["Northwestern University, Evanston, United States","Northwestern University, Evanston, United States"],"email":"mandicai2028@u.northwestern.edu","is_corresponding":false,"name":"Mandi Cai"},{"affiliations":["Northwestern University, Evanston, United States"],"email":"chloemortenson2026@u.northwestern.edu","is_corresponding":false,"name":"Chloe Rose Mortenson"},{"affiliations":["Northwestern University, Evanston, United States"],"email":"hoda@u.northwestern.edu","is_corresponding":false,"name":"Hoda Fakhari"},{"affiliations":["Northwestern University, Evanston, United States"],"email":"aysedlokmanoglu@gmail.com","is_corresponding":false,"name":"Ayse Deniz Lokmanoglu"},{"affiliations":["Northwestern University, Evanston, United States"],"email":"nicholas.diakopoulos@gmail.com","is_corresponding":false,"name":"Nicholas Diakopoulos"},{"affiliations":["Northwestern University, Evanston, United States"],"email":"erik.nisbet@northwestern.edu","is_corresponding":false,"name":"Erik Nisbet"},{"affiliations":["Northwestern University, Chicago, United States"],"email":"matthew.kay@gmail.com","is_corresponding":false,"name":"Matthew Kay"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Fumeng Yang"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1488","time_end":"","time_stamp":"","time_start":"","title":"The Backstory to \u201cSwaying the Public\u201d: A Design Chronicle of Election Forecast Visualizations","uid":"v-full-1488","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Projecting high-dimensional vectors into two dimensions for visualization, known as embedding visualization, facilitates perceptual reasoning and interpretation. Comparison of multiple embedding visualizations drives decision-making in many domains, but conventional comparison methods are limited by a reliance on direct point correspondences. This requirement precludes embedding comparisons without point correspondences, such as two different datasets of annotated images, and fails to capture meaningful higher-level relationships among point groups. To address these shortcomings, we propose a general framework to compare embedding visualizations based on shared class labels rather than individual points. 
Our approach partitions points into regions corresponding to three key class concepts---confusion, neighborhood, and relative size---to characterize intra- and inter-class relationships. Informed by a preliminary user study, we realize an implementation of our framework using perceptual neighborhood graphs to define these regions and introduce metrics to quantify each concept. We demonstrate the generality of our framework with use cases from machine learning and single-cell biology, highlighting our metrics' ability to surface insightful comparisons across label hierarchies. To assess the effectiveness of our approach, we conducted a user study with five machine learning researchers and six single-cell biologists using an interactive and scalable prototype developed in Python and Rust. Our metrics enable more structured comparison through visual guidance and increased participants\u2019 confidence in their findings.","accessible_pdf":false,"authors":[{"affiliations":["Harvard Medical School, Boston, United States"],"email":"trevor_manz@g.harvard.edu","is_corresponding":true,"name":"Trevor Manz"},{"affiliations":["Ozette Technologies, Seattle, United States"],"email":"f.lekschas@gmail.com","is_corresponding":false,"name":"Fritz Lekschas"},{"affiliations":["Ozette Technologies, Seattle, United States"],"email":"palmergreene@gmail.com","is_corresponding":false,"name":"Evan Greene"},{"affiliations":["Ozette Technologies, Seattle, United States"],"email":"greg@ozette.com","is_corresponding":false,"name":"Greg Finak"},{"affiliations":["Harvard Medical School, Boston, United States"],"email":"nils@hms.harvard.edu","is_corresponding":false,"name":"Nils Gehlenborg"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Trevor Manz"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1489","time_end":"","time_stamp":"","time_start":"","title":"A General Framework for Comparing Embedding Visualizations Across Class-Label Hierarchies","uid":"v-full-1489","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Topological abstractions offer a method to summarize the behavior of vector fields, but computing them robustly can be challenging due to numerical precision issues. One alternative is to represent the vector field using a discrete approach, which constructs a collection of pairs of simplices in the input mesh that satisfies criteria introduced by Forman\u2019s discrete Morse theory. While numerous approaches exist to compute pairs in the restricted case of the gradient of a scalar field, state-of-the-art algorithms for the general case of vector fields require expensive optimization procedures. This paper introduces a fast, novel approach for pairing simplices of two-dimensional, triangulated vector fields that do not vary in time. The key insight of our approach is that we can employ a local evaluation, inspired by the approach used to construct a discrete gradient field, where every cell in a mesh is considered by no more than one of its vertices. Specifically, we observe that for any edge in the input mesh, we can uniquely assign an outward direction of flow. 
We can further expand this consistent notion of outward flow at each vertex, which corresponds to the concept of a downhill flow in the case of scalar fields. Working with outward flow enables a linear-time algorithm that processes the (outward) neighborhoods of each vertex one-by-one, similar to the approach used for scalar fields. We couple our approach to constructing discrete vector fields with a method to extract, simplify, and visualize topological features. Empirical results on analytic and simulation data demonstrate drastic improvements in running time, produce features similar to the current state-of-the-art, and show the application of simplification to large, complex flows.","accessible_pdf":false,"authors":[{"affiliations":["University of Arizona, Tucson, United States"],"email":"finkent@arizona.edu","is_corresponding":true,"name":"Tanner Finken"},{"affiliations":["Sorbonne Universit\u00e9, Paris, France"],"email":"julien.tierny@sorbonne-universite.fr","is_corresponding":false,"name":"Julien Tierny"},{"affiliations":["University of Arizona, Tucson, United States"],"email":"josh@cs.arizona.edu","is_corresponding":false,"name":"Joshua A Levine"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Tanner Finken"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1494","time_end":"","time_stamp":"","time_start":"","title":"Localized Evaluation for Constructing Discrete Vector Fields","uid":"v-full-1494","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Haptic feedback provides an essential sensory stimulus crucial for interacting and analyzing three-dimensional spatio-temporal phenomena on surface visualizations. Given its ability to provide enhanced spatial perception and scene maneuverability, virtual reality (VR) catalyzes haptic interactions on surface visualizations. Various interaction modes, encompassing both mid-air and on-surface interactions---with or without the application of assisting force stimuli---have been explored using haptic force feedback devices. In this paper, we evaluate the use of on-surface and assisted on-surface haptic modes of interaction compared to a no-haptic interaction mode. A force-based haptic stylus is used for all three modalities; the on-surface mode uses collision based forces, whereas the assisted on-surface mode is accompanied by an additional snapping force. We conducted a within-subjects user study involving fundamental interaction tasks performed on surface visualizations. Keeping a consistent visual design across all three modes, our study incorporates tasks that require the localization of the highest, lowest, and random points on surfaces; and tasks that focus on brushing curves on surfaces with varying complexity and occlusion levels. Our findings show that participants took almost the same time to brush curves using all the interaction modes. They could draw smoother curves using the on-surface interaction modes compared to the no-haptic mode. However, the assisted on-surface mode provided better accuracy than the on-surface mode. The on-surface mode was slower in point localization, but the accuracy depended on the visual cues and occlusions associated with the tasks. 
Finally, we discuss participant feedback on using haptic force feedback as a tangible input modality and share takeaways to aid the design of haptics-based tangible interactions for surface visualizations.","accessible_pdf":false,"authors":[{"affiliations":["University of Calgary, Calgary, Canada"],"email":"hamza.afzaal@ucalgary.ca","is_corresponding":true,"name":"Hamza Afzaal"},{"affiliations":["University of Calgary, Calgary, Canada"],"email":"ualim@ucalgary.ca","is_corresponding":false,"name":"Usman Alim"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Hamza Afzaal"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1500","time_end":"","time_stamp":"","time_start":"","title":"Evaluating Force-based Haptics for Immersive Tangible Interactions with Surface Visualizations","uid":"v-full-1500","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Visualization is widely used for exploring personal data, but many visualization authoring systems do not support expressing data in flexible, personal, and organic layouts. Sketching is an accessible tool for experimenting with visualization designs, but formalizing sketched elements into structured data representations is difficult, as modifying hand-drawn glyphs to encode data when available is labour-intensive and error prone. We propose an approach where authors structure their own expressive templates, capturing implicit style as well as explicit data mappings, through sketching a representative visualization for an envisioned or partial dataset. Our approach seeks to support freeform exploration and partial specification, balanced against interactive machine support for specifying the generative procedural rules. We implement this approach in DataGarden, a system designed to support hierarchical data visualizations, and evaluate it with 12 participants in a reproduction study and four experts in a freeform creative task. Participants readily picked up the core idea of template authoring, and the variety of workflows we observed highlight how this process serves design and data ideation as well as visual constraint iteration. 
We discuss challenges in implementing the design considerations underpinning DataGarden, and illustrate its potential in a gallery of visualizations generated from authored templates.","accessible_pdf":false,"authors":[{"affiliations":["Universit\u00e9 Paris-Saclay, Orsay, France"],"email":"anna.offenwanger@gmail.com","is_corresponding":true,"name":"Anna Offenwanger"},{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Inria, LISN, Orsay, France"],"email":"theophanis.tsandilas@inria.fr","is_corresponding":false,"name":"Theophanis Tsandilas"},{"affiliations":["University of Toronto, Toronto, Canada"],"email":"fanny@dgp.toronto.edu","is_corresponding":false,"name":"Fanny Chevalier"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Anna Offenwanger"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1502","time_end":"","time_stamp":"","time_start":"","title":"DataGarden: Formalizing Personal Sketches into Structured Visualization Templates","uid":"v-full-1502","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"The increasing reliance on Large Language Models (LLMs) for health information seeking can pose severe risks due to the potential for misinformation and the complexity of these topics. This paper introduces KnowNet, a visualization system that integrates LLMs with Knowledge Graphs (KG) to provide enhanced accuracy and structured exploration. One core idea in KnowNet is to conceptualize the understanding of a subject as the gradual construction of graph visualization, aligning the user's cognitive process with both the structured data in KGs and the unstructured outputs from LLMs. Specifically, we extracted triples (e.g., entities and their relations) from LLM outputs and mapped them into the validated information and supported evidence in external KGs. Based on the neighborhood of the currently explored entities in KGs, KnowNet provides recommendations for further inquiry, aiming to guide a comprehensive understanding without overlooking critical aspects. A progressive graph visualization is proposed to show the alignment between LLMs and KGs, track previous inquiries, and connect this history with current queries and next-step recommendations. 
We demonstrate the effectiveness of our system via use cases and expert interviews.","accessible_pdf":false,"authors":[{"affiliations":["University of Minnesota, Minneapolis, United States"],"email":"yan00111@umn.edu","is_corresponding":false,"name":"Youfu Yan"},{"affiliations":["University of Minnesota, Minneapolis, United States"],"email":"hou00127@umn.edu","is_corresponding":false,"name":"Yu Hou"},{"affiliations":["University of Minnesota, Minneapolis, United States"],"email":"xiao0290@umn.edu","is_corresponding":false,"name":"Yongkang Xiao"},{"affiliations":["University of Minnesota, Minneapolis, United States"],"email":"zhan1386@umn.edu","is_corresponding":false,"name":"Rui Zhang"},{"affiliations":["University of Minnesota, Minneapolis , United States"],"email":"qianwen@umn.edu","is_corresponding":true,"name":"Qianwen Wang"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Qianwen Wang"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1503","time_end":"","time_stamp":"","time_start":"","title":"Guided Health-related Information Seeking from LLMs via Knowledge Graph Integration","uid":"v-full-1503","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"A wide range of visualization authoring interfaces enable the creation of highly customized visualizations. However, prioritizing expressiveness often impedes the learnability of the authoring interface. The diversity of users, such as varying computational skills and prior experiences in user interfaces, makes it even more challenging for a single authoring interface to satisfy the needs of a broad audience. In this paper, we introduce a framework to balance learnability and expressivity in a visualization authoring system. Adopting insights from learnability studies, such as multimodal interaction and visualization literacy, we explore the design space of blending multiple visualization authoring interfaces for supporting authoring tasks in a complementary and flexible manner. To evaluate the effectiveness of blending interfaces, we implemented a proof-of-concept system, Blace, that combines four common visualization authoring interfaces\u2014template-based, shelf configuration, natural language, and code editor\u2014that are tightly linked to one another to help users easily relate unfamiliar interfaces to more familiar ones. Using the system, we conducted a user study with 12 domain experts who regularly visualize genomics data as part of their analysis workflow. Participants with varied visualization and programming backgrounds were able to successfully reproduce complex visualization examples without a guided tutorial in the study. Feedback from a post-study qualitative questionnaire further suggests that blending interfaces enabled participants to learn the system easily and assisted them in confidently editing unfamiliar visualization grammar in the code editor, enabling expressive customization. 
Reflecting on our study results and the design of our system, we discuss the different interaction patterns that we identified and design implications for blending visualization authoring interfaces.","accessible_pdf":false,"authors":[{"affiliations":["Harvard Medical School, Boston, United States"],"email":"sehi_lyi@hms.harvard.edu","is_corresponding":true,"name":"Sehi L'Yi"},{"affiliations":["Eindhoven University of Technology, Eindhoven, Netherlands"],"email":"a.v.d.brandt@tue.nl","is_corresponding":false,"name":"Astrid van den Brandt"},{"affiliations":["Harvard Medical School, Boston, United States"],"email":"etowah_adams@hms.harvard.edu","is_corresponding":false,"name":"Etowah Adams"},{"affiliations":["Harvard Medical School, Boston, United States"],"email":"huyen_nguyen@hms.harvard.edu","is_corresponding":false,"name":"Huyen N. Nguyen"},{"affiliations":["Harvard Medical School, Boston, United States"],"email":"nils@hms.harvard.edu","is_corresponding":false,"name":"Nils Gehlenborg"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Sehi L'Yi"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1504","time_end":"","time_stamp":"","time_start":"","title":"Learnable and Expressive Visualization Authoring Through Blended Interfaces","uid":"v-full-1504","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Despite the recent surge of research efforts to make data visualizations accessible to people who are blind or have low-vision (BLV), how to support BLV people's data analysis remains an important and challenging question. As refreshable tactile displays (RTDs) become cheaper and conversational agents continue to improve, their combination provides a promising approach to support BLV people's interactive data analysis. To understand how BLV people would use and react to a system combining an RTD with a conversational agent, we conducted a Wizard-of-Oz study with 11 BLV participants involving line graphs, bar charts, and isarithmic maps. From an analysis of participant interactions, we identified nine distinct patterns and learned that the choice of modalities depended on the type of task and prior experience with tactile graphics. We also found that participants strongly preferred the combination of RTD and speech to a single modality, and that participants with more tactile experience described how tactile images facilitated deeper engagement with the data and supported independent interpretation. 
Our findings will inform the design of interfaces for such interactive mixed-modality systems.","accessible_pdf":false,"authors":[{"affiliations":["Monash University, Melbourne, Australia"],"email":"samuel.reinders@monash.edu","is_corresponding":true,"name":"Samuel Reinders"},{"affiliations":["Monash University, Melbourne, Australia"],"email":"matthew.butler@monash.edu","is_corresponding":false,"name":"Matthew Butler"},{"affiliations":["Monash University, Clayton, Australia"],"email":"ingrid.zukerman@monash.edu","is_corresponding":false,"name":"Ingrid Zukerman"},{"affiliations":["Yonsei University, Seoul, Korea, Republic of","Microsoft Research, Redmond, United States"],"email":"b.lee@yonsei.ac.kr","is_corresponding":false,"name":"Bongshin Lee"},{"affiliations":["Monash University, Melbourne, Australia"],"email":"lizhen.qu@monash.edu","is_corresponding":false,"name":"Lizhen Qu"},{"affiliations":["Monash University, Melbourne, Australia"],"email":"kim.marriott@monash.edu","is_corresponding":false,"name":"Kim Marriott"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Samuel Reinders"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1522","time_end":"","time_stamp":"","time_start":"","title":"When Refreshable Tactile Displays Meet Conversational Agents: Investigating Accessible Data Presentation and Analysis with Touch and Speech","uid":"v-full-1522","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"We introduce DiffFit, a differentiable algorithm for fitting protein atomistic structures into an experimentally reconstructed cryo-electron microscopy (cryo-EM) volume map. This process is essential in structural biology to semi-automatically reconstruct large meso-scale models of complex protein assemblies and complete cellular structures that are based on measured cryo-EM data. Current approaches require manual fitting in 3D that already results in approximately aligned structures, followed by an automated fine-tuning of the alignment. With our DiffFit approach, we enable domain scientists to automatically fit new structures and visualize the fitting results for inspection and interactive revision. Our fitting begins with differentiable 3D rigid transformations of the protein atom coordinates, followed by sampling the density values at its atom coordinates from the target cryo-EM volume. To ensure a meaningful correlation between the sampled densities and the protein structure, we propose a novel loss function based on a multi-resolution volume-array approach and the exploitation of the negative space. Such a loss function serves as a critical metric for assessing the fitting quality, ensuring both fitting accuracy and improved visualization of the results. We assessed the placement quality of DiffFit with several large, realistic datasets and found its quality to be superior to that of previous methods. We further evaluated our method in two use cases. First, we demonstrate its use in the process of automating the integration of known composite structures into larger protein complexes. Second, we show that it facilitates the fitting of predicted protein domains into volume densities to aid researchers in the identification of unknown proteins. 
We implemented our algorithm as an open-source plugin (github.com/nanovis/DiffFitViewer) in ChimeraX, a leading visualization software in the field. All supplemental materials are available at osf.io/5tx4q.","accessible_pdf":false,"authors":[{"affiliations":["King Abdullah University of Science and Technology, Thuwal, Saudi Arabia"],"email":"deng.luo@kaust.edu.sa","is_corresponding":true,"name":"Deng Luo"},{"affiliations":["King Abdullah University of Science and Technology, Thuwal, Saudi Arabia"],"email":"zainab.alsuwaykit@kaust.edu.sa","is_corresponding":false,"name":"Zainab Alsuwaykit"},{"affiliations":["King Abdullah University of Science and Technology, Thuwal, Saudi Arabia"],"email":"dawar.khan@kaust.edu.sa","is_corresponding":false,"name":"Dawar Khan"},{"affiliations":["King Abdullah University of Science and Technology, Thuwal, Saudi Arabia"],"email":"ondrej.strnad@kaust.edu.sa","is_corresponding":false,"name":"Ond\u0159ej Strnad"},{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Orsay, France","Inria, Saclay, France"],"email":"tobias.isenberg@gmail.com","is_corresponding":false,"name":"Tobias Isenberg"},{"affiliations":["King Abdullah University of Science and Technology, Thuwal, Saudi Arabia"],"email":"ivan.viola@kaust.edu.sa","is_corresponding":false,"name":"Ivan Viola"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Deng Luo"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1533","time_end":"","time_stamp":"","time_start":"","title":"DiffFit: Visually-Guided Differentiable Fitting of Molecule Structures to a Cryo-EM Map","uid":"v-full-1533","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Large Language Models (LLMs) have been successfully adopted for a variety of visualization tasks, but how far are we from perceptually aware LLMs that can predict human takeaways from visualizations? Graphical perception literature has shown that human chart takeaways are sensitive to visualization design choices, such as the spatial arrangement. In this work, we examine how well LLMs can predict such design choice sensitivity when generating takeaways, using bar charts with varying spatial layouts as a case study. We test four common chart arrangements: vertically juxtaposed, horizontally juxtaposed, overlaid, and stacked, through three experimental phases. In Phase 1, we identified the optimal configuration of LLMs to generate meaningful chart takeaways, across four LLMs (GPT3.5, GPT4, GPT4V, and Gemini 1.0 Pro), two temperature settings (0, 0.7), four chart specifications (Vega-Lite, Matplotlib, ggplot2, and scene graphs), and several prompting strategies. We found that even state-of-the-art LLMs can struggle to generate factually accurate takeaways. In Phase 2, using the optimal LLM configuration, we generated 30 chart takeaways across the four arrangements of bar charts using two datasets, with both zero-shot and one-shot settings. Compared to data on human takeaways from prior work, we found that the takeaways LLMs generate often do not align with human comparisons. In Phase 3, we examined the effect of the charts\u2019 underlying data values on takeaway alignment between humans and LLMs, and found both matches and mismatches. 
Overall, our work evaluates the ability of LLMs to emulate human interpretations of data and points to challenges and opportunities in using LLMs to predict human-aligned chart takeaways.","accessible_pdf":false,"authors":[{"affiliations":["University of Washington, Seattle, United States"],"email":"wwill@cs.washington.edu","is_corresponding":true,"name":"Huichen Will Wang"},{"affiliations":["Adobe Research, Seattle, United States"],"email":"jhoffs@adobe.com","is_corresponding":false,"name":"Jane Hoffswell"},{"affiliations":["University of Massachusetts Amherst, Amherst, United States"],"email":"yukithane@gmail.com","is_corresponding":false,"name":"Sao Myat Thazin Thane"},{"affiliations":["Adobe Research, San Jose, United States"],"email":"victorbursztyn2022@u.northwestern.edu","is_corresponding":false,"name":"Victor S. Bursztyn"},{"affiliations":["Georgia Tech, Atlanta, United States"],"email":"cxiong@gatech.edu","is_corresponding":false,"name":"Cindy Xiong Bearfield"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Huichen Will Wang"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1544","time_end":"","time_stamp":"","time_start":"","title":"How Aligned are Human Chart Takeaways and LLM Predictions? A Case Study on Bar Charts with Varying Layouts","uid":"v-full-1544","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Visual validation of regression models in scatterplots is a common practice for assessing model quality, yet its efficacy remains unquantified. We conducted two empirical experiments to investigate individuals' ability to visually validate linear regression models (linear trends) and to examine the impact of common visualization designs on validation quality. The first experiment showed that the level of accuracy for visual estimation of slope (i.e., fitting a line to data) is higher than for visual validation of slope (i.e., accepting a shown line). Notably, we found bias toward slopes that are \u201ctoo steep\u201d in both cases. This led to the novel insight that participants naturally assessed regression with orthogonal distances between the points and the line (i.e., ODR regression) rather than the common vertical distances (OLS regression). In the second experiment, we investigated whether incorporating common designs for regression visualization (error lines, bounding boxes, and confidence intervals) would improve visual validation. Even though error lines reduced validation bias, results failed to show the desired improvements in accuracy for any design. 
Overall, our findings suggest caution in using visual model validation for linear trends in scatterplots.","accessible_pdf":false,"authors":[{"affiliations":["University of Cologne, Cologne, Germany"],"email":"braun@cs.uni-koeln.de","is_corresponding":true,"name":"Daniel Braun"},{"affiliations":["Tufts University, Medford, United States"],"email":"remco@cs.tufts.edu","is_corresponding":false,"name":"Remco Chang"},{"affiliations":["University of Wisconsin - Madison, Madison, United States"],"email":"gleicher@cs.wisc.edu","is_corresponding":false,"name":"Michael Gleicher"},{"affiliations":["University of Cologne, Cologne, Germany"],"email":"landesberger@cs.uni-koeln.de","is_corresponding":false,"name":"Tatiana von Landesberger"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Daniel Braun"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1547","time_end":"","time_stamp":"","time_start":"","title":"Beware of Validation by Eye: Visual Validation of Linear Trends in Scatterplots","uid":"v-full-1547","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Dimensionality reduction techniques are widely used for visualizing high-dimensional data. However, support for interpreting patterns in dimensionality reduction results in the context of the original data space is often insufficient. Consequently, users may struggle to extract insights from the projections. In this paper we introduce DimBridge, a visual analytics tool that allows users to interact with visual patterns in a projection and retrieve corresponding data patterns. DimBridge supports several interactions, allowing users to perform various analyses, from contrasting multiple clusters to explaining complex latent structures. Leveraging first-order predicate logic, DimBridge identifies subspaces in the original dimensions relevant to a queried pattern and provides an interface for users to visualize and interact with them. We demonstrate how DimBridge can help users overcome the challenges associated with interpreting visual patterns in projections.","accessible_pdf":false,"authors":[{"affiliations":["Tufts University, Medford, United States"],"email":"brianmontambault@gmail.com","is_corresponding":true,"name":"Brian Montambault"},{"affiliations":["Tufts University, Medford, United States"],"email":"gabriel.appleby@tufts.edu","is_corresponding":false,"name":"Gabriel Appleby"},{"affiliations":["Tufts University, Boston, United States"],"email":"jen@cs.tufts.edu","is_corresponding":false,"name":"Jen Rogers"},{"affiliations":["Tufts University, Medford, United States"],"email":"camelia_daniela.brumar@tufts.edu","is_corresponding":false,"name":"Camelia D. 
Brumar"},{"affiliations":["Vanderbilt University, Nashville, United States"],"email":"mingwei.li@tufts.edu","is_corresponding":false,"name":"Mingwei Li"},{"affiliations":["Tufts University, Medford, United States"],"email":"remco@cs.tufts.edu","is_corresponding":false,"name":"Remco Chang"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Brian Montambault"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1568","time_end":"","time_stamp":"","time_start":"","title":"DimBridge: Interactive Explanation of Visual Patterns in Dimensionality Reductions with Predicate Logic","uid":"v-full-1568","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Effective security patrol management is critical for ensuring safety in diverse environments such as art galleries, airports, and factories. The behavior of patrols in these situations can be modeled by patrolling games. They simulate the behavior of the patrol and adversary in the building, which is modeled as a graph of interconnected nodes representing rooms. The designers of algorithms solving the game face the problem of analyzing complex graph layouts with temporal dependencies. Therefore, appropriate visual support is crucial for them to work effectively. In this paper, we present a novel tool that helps the designers of patrolling games explore the outcomes of the proposed algorithms and approaches, evaluate their success rate, and propose modifications that can improve their solutions. Our tool offers an intuitive and interactive interface, featuring a detailed exploration of patrol routes and probabilities of taking them, simulation of patrols, and other requested features. In close collaboration with experts in designing patrolling games, we conducted three case studies demonstrating the usage and usefulness of our tool. 
The prototype of the tool, along with exemplary datasets, is available at https://gitlab.fi.muni.cz/formela/strategy-vizualizer.","accessible_pdf":false,"authors":[{"affiliations":["Masaryk University, Brno, Czech Republic"],"email":"langm@mail.muni.cz","is_corresponding":true,"name":"Mat\u011bj Lang"},{"affiliations":["Masaryk University, Brno, Czech Republic"],"email":"469242@mail.muni.cz","is_corresponding":false,"name":"Adam \u0160t\u011bp\u00e1nek"},{"affiliations":["Faculty of Informatics, Masaryk University, Brno, Czech Republic"],"email":"514179@mail.muni.cz","is_corresponding":false,"name":"R\u00f3bert Zvara"},{"affiliations":["Faculty of Informatics, Masaryk University, Brno, Czech Republic"],"email":"rehak@fi.muni.cz","is_corresponding":false,"name":"Vojt\u011bch \u0158eh\u00e1k"},{"affiliations":["Masaryk University, Brno, Czech Republic"],"email":"kozlikova@fi.muni.cz","is_corresponding":false,"name":"Barbora Kozlikova"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Mat\u011bj Lang"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1571","time_end":"","time_stamp":"","time_start":"","title":"Who Let the Guards Out: Visual Support for Patrolling Games","uid":"v-full-1571","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"The numerical extraction of vortex cores from time-dependent fluid flow has attracted much attention over the past decades. A commonly agreed-upon vortex definition has remained elusive, since a proper vortex core needs to satisfy two hard constraints: it must be objective and Lagrangian. Recent methods on objectivization met the first but not the second constraint, since there was no formal guarantee that the resulting vortex coreline is indeed a pathline of the fluid flow. In this paper, we propose the first vortex core definition that is both objective and Lagrangian. Our approach restricts observer motions to follow along pathlines, which reduces the degrees of freedom: we only need to optimize for an observer rotation that makes the observed flow as steady as possible. This optimization succeeds along Lagrangian vortex corelines and will result in a non-zero time-partial derivative everywhere else. By performing this optimization at each point of a spatial grid, we obtain a residual scalar field, which we call vortex deviation error. The local minima on the grid serve as seed points for a gradient descent optimization that delivers sub-voxel accurate corelines. The visualization of both 2D and 3D vortex cores is based on the separation of the movement of the vortex core and the swirling flow behavior around it. While the vortex core is represented by a pathline, the swirling motion around it is visualized by streamlines in the correct frame. 
We demonstrate the utility of the approach on several 2D and 3D time-dependent vector fields.","accessible_pdf":false,"authors":[{"affiliations":["Friedrich-Alexander-University Erlangen-N\u00fcrnberg, Erlangen, Germany"],"email":"tobias.guenther@fau.de","is_corresponding":true,"name":"Tobias G\u00fcnther"},{"affiliations":["University of Magdeburg, Magdeburg, Germany"],"email":"theisel@ovgu.de","is_corresponding":false,"name":"Holger Theisel"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Tobias G\u00fcnther"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1574","time_end":"","time_stamp":"","time_start":"","title":"Objective Lagrangian Vortex Cores and their Visual Representations","uid":"v-full-1574","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"The visualization community has a rich history of reflecting upon visualization design flaws. Although research in this area has remained lively, we believe it is essential to continuously revisit this classic and critical topic in visualization research by incorporating more empirical evidence from diverse sources, characterizing new design flaws, building more systematic theoretical frameworks, and understanding the underlying reasons for these flaws. To address the above gaps, this work investigated visualization design flaws through the lens of the public, constructed a framework to summarize and categorize the identified flaws, and explored why these flaws occur. Specifically, we analyzed 2227 flawed data visualizations collected from an online gallery and derived a design task-associated taxonomy containing 76 specific design flaws. These flaws were further classified into three high-level categories (i.e., misinformation, uninformativeness, unsociability) and ten subcategories (e.g., inaccuracy, unfairness, ambiguity). Next, we organized five focus groups to explore why these design flaws occur and identified seven causes of the flaws. 
Finally, we proposed a research agenda for combating visualization design flaws and summarized nine research opportunities.","accessible_pdf":false,"authors":[{"affiliations":["Fudan University, Shanghai, China","Fudan University, Shanghai, China"],"email":"xingyulan96@gmail.com","is_corresponding":true,"name":"Xingyu Lan"},{"affiliations":["University of Edinburgh, Edinburgh, United Kingdom","University of Edinburgh, Edinburgh, United Kingdom"],"email":"coraline.liu.dataviz@gmail.com","is_corresponding":false,"name":"Yu Liu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Xingyu Lan"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1594","time_end":"","time_stamp":"","time_start":"","title":"I Came Across a Junk: Understanding Design Flaws of Data Visualization from the Public's Perspective","uid":"v-full-1594","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Assigning discriminable and harmonic colors to samples according to their class labels and spatial distribution can generate attractive visualizations and facilitate data exploration. However, as the number of classes increases, it is challenging to generate a high-quality color assignment result that accommodates all classes simultaneously. A practical solution is to organize classes into a hierarchy and then dynamically assign colors during exploration. However, existing color assignment methods fall short in generating high-quality color assignment results and dynamically aligning them with hierarchical structures. To address this issue, we develop a dynamic color assignment method for hierarchical data, which is formulated as a multi-objective optimization problem. This method simultaneously considers color discriminability, color harmony, and spatial distribution at each hierarchical level. By using the colors of parent classes to guide the color assignment of their child classes, our method further promotes both consistency and clarity across hierarchical levels. 
We demonstrate the effectiveness of our method in generating dynamic color assignment results with quantitative experiments and a user study.","accessible_pdf":false,"authors":[{"affiliations":["Tsinghua University, Beijing, China"],"email":"jiashu0717c@gmail.com","is_corresponding":true,"name":"Jiashu Chen"},{"affiliations":["Tsinghua University, Beijing, China"],"email":"vicayang496@gmail.com","is_corresponding":false,"name":"Weikai Yang"},{"affiliations":["Tsinghua University, Beijing, China"],"email":"jiazl22@mails.tsinghua.edu.cn","is_corresponding":false,"name":"Zelin Jia"},{"affiliations":["Tsinghua University, Beijing, China"],"email":"tarolancy@gmail.com","is_corresponding":false,"name":"Lanxi Xiao"},{"affiliations":["Tsinghua University, Beijing, China"],"email":"shixia@tsinghua.edu.cn","is_corresponding":false,"name":"Shixia Liu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Jiashu Chen"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1595","time_end":"","time_stamp":"","time_start":"","title":"Dynamic Color Assignment for Hierarchical Data","uid":"v-full-1595","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"In understanding and redesigning the function of proteins in modern biochemistry, protein engineers are increasingly focusing on exploring regions in proteins called loops. Analyzing various characteristics of these regions helps the experts design the transfer of the desired function from one protein to another. This process is denoted as loop grafting. We designed a set of interactive visualizations that provide experts with visual support through all the loop grafting pipeline steps. The workflow is divided into several phases, reflecting the steps of the pipeline. Each phase is supported by a specific set of abstracted 2D visual representations of proteins and their loops that are interactively linked with the 3D View of proteins. By sequentially passing through the individual phases, the user shapes the list of loops that are potential candidates for loop grafting. Finally, the actual in-silico insertion of the loop candidates from one protein to the other is performed, and the results are visually presented to the user. In this way, the fully computational rational design of proteins and their loops results in newly designed protein structures that can be further assembled and tested through in-vitro experiments. We showcase the contribution of our visual support design on a real-world scenario of changing the enantiomer selectivity of an engineered enzyme. Moreover, we provide the readers with the experts' feedback.","accessible_pdf":false,"authors":[{"affiliations":["Masaryk University, Brno, Czech Republic"],"email":"kiraa@mail.muni.cz","is_corresponding":false,"name":"Filip Op\u00e1len\u00fd"},{"affiliations":["Masaryk University, Brno, Czech Republic"],"email":"paloulbrich@gmail.com","is_corresponding":false,"name":"Pavol Ulbrich"},{"affiliations":["Masaryk University, Brno, Czech Republic","St. 
Anne\u2019s University Hospital, Brno, Czech Republic"],"email":"joan.planas@mail.muni.cz","is_corresponding":false,"name":"Joan Planas-Iglesias"},{"affiliations":["Masaryk University, Brno, Czech Republic","University of Bergen, Bergen, Norway"],"email":"xbyska@fi.muni.cz","is_corresponding":false,"name":"Jan By\u0161ka"},{"affiliations":["Masaryk University, Brno, Czech Republic","St. Anne\u2019s University Hospital, Brno, Czech Republic"],"email":"stourac.jan@gmail.com","is_corresponding":false,"name":"Jan \u0160toura\u010d"},{"affiliations":["Faculty of Science, Masaryk University, Brno, Czech Republic","St. Anne\u2019s University Hospital Brno, Brno, Czech Republic"],"email":"222755@mail.muni.cz","is_corresponding":false,"name":"David Bedn\u00e1\u0159"},{"affiliations":["Masaryk University, Brno, Czech Republic"],"email":"katarina.furmanova@gmail.com","is_corresponding":true,"name":"Katar\u00edna Furmanov\u00e1"},{"affiliations":["Masaryk University, Brno, Czech Republic"],"email":"kozlikova@fi.muni.cz","is_corresponding":false,"name":"Barbora Kozlikova"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Katar\u00edna Furmanov\u00e1"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1597","time_end":"","time_stamp":"","time_start":"","title":"Visual Support for the Loop Grafting Workflow on Proteins","uid":"v-full-1597","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Existing deep learning-based surrogate models facilitate efficient data generation, but fall short in uncertainty quantification, efficient parameter space exploration, and reverse prediction. In our work, we introduce SurroFlow, a novel normalizing flow-based surrogate model, to learn the invertible transformation between simulation parameters and simulation outputs. The model not only allows accurate predictions of simulation outcomes for a given simulation parameter but also supports uncertainty quantification in the data generation process. Additionally, it enables efficient simulation parameter recommendation and exploration. We integrate SurroFlow and a genetic algorithm as the backend of a visual interface to support effective user-guided ensemble simulation exploration and visualization. 
Our framework significantly reduces the computational costs while enhancing the reliability and exploration capabilities of scientific surrogate models.","accessible_pdf":false,"authors":[{"affiliations":["The Ohio State University, Columbus, United States","The Ohio State University, Columbus, United States"],"email":"shen.1250@osu.edu","is_corresponding":true,"name":"JINGYI SHEN"},{"affiliations":["The Ohio State University, Columbus, United States","The Ohio State University, Columbus, United States"],"email":"duan.418@osu.edu","is_corresponding":false,"name":"Yuhan Duan"},{"affiliations":["The Ohio State University , Columbus , United States","The Ohio State University , Columbus , United States"],"email":"hwshen@cse.ohio-state.edu","is_corresponding":false,"name":"Han-Wei Shen"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["JINGYI SHEN"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1599","time_end":"","time_stamp":"","time_start":"","title":"SurroFlow: A Flow-Based Surrogate Model for Parameter Space Exploration and Uncertainty Quantification","uid":"v-full-1599","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Multi-modal embeddings form the foundation for vision-language models, such as CLIP embeddings, the most widely used text-image embeddings. However, these embeddings are hard to interpret and vulnerable to subtle misalignment of cross-modal features, resulting in decreased model performance and diminished generalization. To address this problem, we design ModalChorus, an interactive system for visual probing and alignment of multi-modal embeddings. ModalChorus primarily offers a two-stage process: 1) embedding probing with Modal Fusion Map (MFM), a novel parametric dimensionality reduction method that integrates both metric and nonmetric objectives to enhance modality fusion; and 2) embedding alignment that allows users to interactively articulate intentions for both point-set and set-set alignments. Quantitative and qualitative comparisons for CLIP embeddings with existing dimensionality reduction (e.g., t-SNE and MDS) and data fusion (e.g., data context map) methods demonstrate the advantages of MFM in showcasing cross-modal features over common vision-language datasets. 
Case studies reveal that ModalChorus can facilitate intuitive discovery of misalignment and efficient re-alignment in scenarios ranging from zero-shot classification to cross-modal retrieval and generation.","accessible_pdf":false,"authors":[{"affiliations":["The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China"],"email":"yyebd@connect.ust.hk","is_corresponding":true,"name":"Yilin Ye"},{"affiliations":["The Hong Kong University of Science and Technology(Guangzhou), Guangzhou, China"],"email":"sxiao713@connect.hkust-gz.edu.cn","is_corresponding":false,"name":"Shishi Xiao"},{"affiliations":["the Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China"],"email":"xingchen.zeng@outlook.com","is_corresponding":false,"name":"Xingchen Zeng"},{"affiliations":["The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China","The Hong Kong University of Science and Technology, Hong Kong SAR, China"],"email":"weizeng@hkust-gz.edu.cn","is_corresponding":false,"name":"Wei Zeng"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yilin Ye"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1603","time_end":"","time_stamp":"","time_start":"","title":"ModalChorus: Visual Probing and Alignment of Multi-modal Embeddings via Modal Fusion Map","uid":"v-full-1603","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"With the increase of graph size, it becomes difficult or even impossible to visualize graph structures clearly within the limited screen space. Consequently, it is crucial to design effective visual representations for large graphs. In this paper, we propose AdaMotif, a novel approach that can capture the essential structure patterns of large graphs and effectively reveal the overall structures via adaptive motif designs. Specifically, our approach involves partitioning a given large graph into multiple subgraphs, then clustering similar subgraphs and extracting similar structural information within each cluster. Subsequently, adaptive motifs representing each cluster are generated and utilized to replace the corresponding subgraphs, leading to a simplified visualization. Our approach aims to preserve as much information from the subgraphs as possible, effectively simplifying graphs while minimizing information loss. Notably, our approach successfully visualizes crucial community information within a large graph. We conduct case studies and a user study using both synthetic and real-world graphs to validate the effectiveness of our proposed approach. 
The results demonstrate the capability of our approach in simplifying graphs while retaining important structural and community information.","accessible_pdf":false,"authors":[{"affiliations":["Shenzhen University, Shenzhen, China"],"email":"hzhou@szu.edu.cn","is_corresponding":true,"name":"Hong Zhou"},{"affiliations":["Shenzhen University, Shenzhen, China"],"email":"laipeifeng1111@gmail.com","is_corresponding":false,"name":"Peifeng Lai"},{"affiliations":["Shenzhen University, Shenzhen, China"],"email":"zhida.sun@connect.ust.hk","is_corresponding":false,"name":"Zhida Sun"},{"affiliations":["Shenzhen University, Shenzhen, China"],"email":"2310274034@email.szu.edu.cn","is_corresponding":false,"name":"Xiangyuan Chen"},{"affiliations":["Shenzhen University, Shen Zhen, China"],"email":"275621136@qq.com","is_corresponding":false,"name":"Yang Chen"},{"affiliations":["Shenzhen University, Shenzhen, China"],"email":"hswu@szu.edu.cn","is_corresponding":false,"name":"Huisi Wu"},{"affiliations":["Nanyang Technological University, Singapore, Singapore"],"email":"yong-wang@ntu.edu.sg","is_corresponding":false,"name":"Yong WANG"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Hong Zhou"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1606","time_end":"","time_stamp":"","time_start":"","title":"AdaMotif: Graph Simplification via Adaptive Motif Design","uid":"v-full-1606","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Partitionings (or segmentations) divide a given domain into disjoint connected regions whose union forms again the entire domain. Multi-dimensional partitionings occur, for example, when analyzing parameter spaces of simulation models, where each segment of the partitioning represents a region of similar model behavior. Having computed a partitioning, one is commonly interested in understanding how large the segments are and which segments lie next to each other. While visual representations of 2D domain partitionings that reveal sizes and neighborhoods are straightforward, this is no longer the case when considering multi-dimensional domains of three or more dimensions. We propose an algorithm for computing 2D embeddings of multi-dimensional partitionings. The embedding shall have the following properties: It shall maintain the topology of the partitioning and optimize the area sizes and joint boundary lengths of the embedded segments to match the respective sizes and lengths in the multi-dimensional domain. We demonstrate the effectiveness of our approach by applying it to different use cases, including the visual exploration of 3D spatial domain segmentations and multi-dimensional parameter space partitionings of simulation ensembles. 
We numerically evaluate our algorithm with respect to how well sizes and lengths are preserved depending on the dimensionality of the domain and the number of segments.","accessible_pdf":false,"authors":[{"affiliations":["University of M\u00fcnster, M\u00fcnster, Germany"],"email":"m_ever14@uni-muenster.de","is_corresponding":true,"name":"Marina Evers"},{"affiliations":["University of M\u00fcnster, M\u00fcnster, Germany"],"email":"linsen@uni-muenster.de","is_corresponding":false,"name":"Lars Linsen"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Marina Evers"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1612","time_end":"","time_stamp":"","time_start":"","title":"2D Embeddings of Multi-dimensional Partitionings","uid":"v-full-1612","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"We present a path-based design model and system for designing and creating visualisations. Our model represents a systematic approach to constructing visual representations of data or concepts following a predefined sequence of steps. The initial step involves outlining the overall appearance of the visualisation by creating a skeleton structure, referred to as a flowpath. Subsequently, we specify objects, visual marks, properties, and appearance, storing them in a gene. Lastly, we map data onto the flowpath, ensuring suitable morphisms. Alternative designs are created by exchanging values in the gene. For example, designs that share similar traits are created by making small incremental changes to the gene. Our design method develops a wide variety of creative ideas, space-filling visualisations, and traditional designs (bar chart, pie chart, etc.). Our implementation demonstrates the model, and we apply the output visualisations onto a smart-watch and onto visualisation dashboards. In this article we (1) introduce, define and explain the path model and discuss possibilities for its use, (2) present our implementation, results, and evaluation, and (3) demonstrate and evaluate an application of its use on a mobile watch.","accessible_pdf":false,"authors":[{"affiliations":["ExaDev, Gaerwen, United Kingdom","Bangor University, Bangor, United Kingdom"],"email":"james.ogge@gmail.com","is_corresponding":false,"name":"James R Jackson"},{"affiliations":["Bangor University, Bangor, United Kingdom"],"email":"p.ritsos@bangor.ac.uk","is_corresponding":false,"name":"Panagiotis D. Ritsos"},{"affiliations":["Bangor University, Bangor, United Kingdom"],"email":"p.butcher@bangor.ac.uk","is_corresponding":false,"name":"Peter W. S. 
Butcher"},{"affiliations":["Bangor University, Bangor, United Kingdom"],"email":"j.c.roberts@bangor.ac.uk","is_corresponding":true,"name":"Jonathan C Roberts"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Jonathan C Roberts"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1613","time_end":"","time_stamp":"","time_start":"","title":"Path-based Design Model for Constructing and Exploring Alternative Visualisations","uid":"v-full-1613","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"We present Cell2Cell, a novel visual analytics approach for quantifying and visualizing networks of cell-cell interactions in three-dimensional (3D) multi-channel cancerous tissue data. By analyzing cellular interactions, biomedical domain experts can gain a more accurate understanding of the intricate relationships between cancer and immune cells. Recent methods have focused on inferring interaction based on the proximity of cells in low-resolution 2D multi-channel imaging data. By contrast, we analyze cell interactions by quantifying the intensities of protein expressions extracted from high-resolution 3D multi-channel volume data. Such analyses have a strong exploratory nature and require a tight integration of domain experts in the analysis loop to leverage their deep knowledge. We propose two complementary semi-automated approaches to cope with the increasing size and complexity of the data in an interactive fashion: On the one hand, we interpret cell-to-cell interactions as edges in a cell graph and analyze the image signal (protein expressions) along those edges, using spatial as well as abstract data visualizations. Complementary, we propose a cell-centered approach, enabling scientists to visually analyze polarized distributions of proteins in three dimensions, which also captures neighboring cells with biochemical and cell biological consequences. We evaluate our application in two case studies, where computational biologists and medical experts use \\tool to investigate tumor micro-environments to identify and quantify T-cell activation in human tissue data. 
We confirmed that our tool can fully solve both use cases and enables a streamlined and detailed analysis of cell-cell interactions.","accessible_pdf":false,"authors":[{"affiliations":["Harvard Medical School, Boston, United States"],"email":"eric.moerth@gmx.at","is_corresponding":true,"name":"Eric M\u00f6rth"},{"affiliations":["University of Vienna, Vienna, Austria"],"email":"kevin.sidak@univie.ac.at","is_corresponding":false,"name":"Kevin Sidak"},{"affiliations":["Harvard Medical School, Boston, United States"],"email":"zoltan_maliga@hms.harvard.edu","is_corresponding":false,"name":"Zoltan Maliga"},{"affiliations":["University of Vienna, Vienna, Austria"],"email":"torsten.moeller@univie.ac.at","is_corresponding":false,"name":"Torsten M\u00f6ller"},{"affiliations":["Harvard Medical School, Boston, United States"],"email":"nils@hms.harvard.edu","is_corresponding":false,"name":"Nils Gehlenborg"},{"affiliations":["Harvard University, Cambridge, United States"],"email":"peter_sorger@hms.harvard.edu","is_corresponding":false,"name":"Peter Sorger"},{"affiliations":["Harvard University, Cambridge, United States"],"email":"pfister@seas.harvard.edu","is_corresponding":false,"name":"Hanspeter Pfister"},{"affiliations":["Harvard University, Cambridge, United States"],"email":"jbeyer@g.harvard.edu","is_corresponding":false,"name":"Johanna Beyer"},{"affiliations":["New York University, New York, United States","Harvard University, Boston, United States"],"email":"rk4815@nyu.edu","is_corresponding":false,"name":"Robert Kr\u00fcger"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Eric M\u00f6rth"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1615","time_end":"","time_stamp":"","time_start":"","title":"Cell2Cell: Explorative Cell Interaction Analysis in Multi-Volumetric Tissue Data","uid":"v-full-1615","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"We propose and study a novel cross-reality environment that seamlessly integrates a monoscopic 2D surface (an interactive screen with touch and pen input) with a stereoscopic 3D space (an augmented reality HMD) to jointly host spatial data visualizations. This innovative approach combines the best of two conventional methods of displaying and manipulating spatial 3D data, enabling users to fluidly explore diverse visual forms using tailored interaction techniques. Providing such effective 3D data exploration techniques is pivotal for conveying its intricate spatial structures\u2014often at multiple spatial or semantic scales\u2014across various application domains and requiring diverse visual representations for effective visualization. To understand user reactions to our new environment, we began with an elicitation user study, in which we captured their responses and interactions. We observed that users adapted their interaction approaches based on perceived visual representations, with natural transitions in spatial awareness and actions while navigating across the physical surface. Our findings then informed the development of a design space for spatial data exploration in cross-reality. 
We thus developed cross-reality environments tailored to three distinct domains: for 3D molecular structure data, for 3D point cloud data, and for 3D anatomical data. In particular, we designed interaction techniques that account for the inherent features of interactions in both spaces, facilitating various forms of interaction including mid-air gestures, touch interactions, pen interactions, and combinations thereof to enhance the users' sense of presence and engagement. We assessed the usability of our environment with biologists, focusing on its use for domain research. In addition, we evaluated our interaction transition designs with virtual and mixed-reality experts to gather further insights. As a result, we provide our design suggestions for the cross-reality environment, emphasizing the interaction with diverse visual representations and seamless interaction transitions between 2D and 3D spaces.","accessible_pdf":false,"authors":[{"affiliations":["Xi'an Jiaotong-Liverpool University, Suzhou, China"],"email":"lixiang.zhao17@student.xjtlu.edu.cn","is_corresponding":false,"name":"Lixiang Zhao"},{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Orsay, France","Inria, Saclay, France"],"email":"tobias.isenberg@gmail.com","is_corresponding":false,"name":"Tobias Isenberg"},{"affiliations":["Xi'an Jiaotong-Liverpool University, Suzhou, China"],"email":"fuqi.xie20@student.xjtlu.edu.cn","is_corresponding":false,"name":"Fuqi Xie"},{"affiliations":["Xi'an Jiaotong-Liverpool University, Suzhou, China"],"email":"hainingliang@hkust-gz.edu.cn","is_corresponding":false,"name":"Hai-Ning Liang"},{"affiliations":["Xi'an Jiaotong-Liverpool University, Suzhou, China"],"email":"lingyun.yu@xjtlu.edu.cn","is_corresponding":true,"name":"Lingyun Yu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Lingyun Yu"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1626","time_end":"","time_stamp":"","time_start":"","title":"SpatialTouch: Exploring Spatial Data Visualizations in Cross-reality","uid":"v-full-1626","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"High-dimensional data, characterized by many features, can be difficult to visualize effectively. Dimensionality reduction techniques, such as PCA, UMAP, and t-SNE, address this challenge by projecting the data into a lower-dimensional space while preserving important relationships. TopoMap is another technique that excels at preserving the underlying structure of the data, leading to interpretable visualizations. In particular, TopoMap maps the high-dimensional data into a visual space, guaranteeing that the 0-dimensional persistence diagram of the Rips filtration of the visual space matches the one from the high-dimensional data. However, the original TopoMap algorithm can be slow and its layout can be too sparse for large and complex datasets. In this paper, we propose three improvements to TopoMap: 1) a more space-efficient layout, 2) a significantly faster implementation, and 3) a novel treemap-based representation to aid the exploration of the projections. 
These advancements make TopoMap, now referred to as TopoMap++, a more powerful tool for visualizing high-dimensional data, similar to how t-SNE surpassed SNE in popularity.","accessible_pdf":false,"authors":[{"affiliations":["New York University, New York City, United States"],"email":"vitoriaguardieiro@gmail.com","is_corresponding":true,"name":"Vitoria Guardieiro"},{"affiliations":["New York University, New York City, United States"],"email":"felipedeoliveira1407@gmail.com","is_corresponding":false,"name":"Felipe Inagaki de Oliveira"},{"affiliations":["Microsoft Research India, Bangalore, India"],"email":"harish.doraiswamy@microsoft.com","is_corresponding":false,"name":"Harish Doraiswamy"},{"affiliations":["University of Sao Paulo, Sao Carlos, Brazil"],"email":"gnonato@icmc.usp.br","is_corresponding":false,"name":"Luis Gustavo Nonato"},{"affiliations":["New York University, New York City, United States"],"email":"csilva@nyu.edu","is_corresponding":false,"name":"Claudio Silva"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Vitoria Guardieiro"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1632","time_end":"","time_stamp":"","time_start":"","title":"TopoMap++: A faster and more space efficient technique to compute projections with topological guarantees","uid":"v-full-1632","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Probability density function (PDF) curves are among the few charts on a Cartesian coordinate system that are commonly presented without y-axes. This design decision may be due to the lack of relevance of vertical scaling in normal PDFs. In fact, as long as two normal PDFs have the same mean and standard deviations (SDs), they can be scaled to occupy different amounts of vertical space while still remaining statistically identical. Because unscaled PDF height increases as SD decreases, visualization designers may find themselves tempted to vertically shrink low-SD PDFs to avoid occlusion or save white space in their figures. While irregular vertical scaling has been explored in bar and line charts, the visualization community has yet to investigate how this purely visual manipulation may affect reader comparisons of PDFs. In this paper, we present two preregistered quantitative experiments (n=600, n=401) that systematically demonstrate that vertical scaling can lead to misinterpretations of PDFs. We also test visual interventions to mitigate misinterpretation. In some contexts, we find that including a y-axis reduces this effect. Overall, we find that keeping vertical scaling consistent, and therefore maintaining equal pixel areas under PDF curves, results in the highest likelihood of accurate comparisons. Our findings provide the first insights into the impact of vertical scaling on PDFs, and reveal the complicated nature of proportional area comparisons.","accessible_pdf":false,"authors":[{"affiliations":["Northeastern University, Boston, United States"],"email":"racquel.fygenson@gmail.com","is_corresponding":true,"name":"Racquel Fygenson"},{"affiliations":["Northeastern University, Boston, United States"],"email":"l.padilla@northeastern.edu","is_corresponding":false,"name":"Lace M. 
Padilla"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Racquel Fygenson"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1638","time_end":"","time_stamp":"","time_start":"","title":"The Impact of Vertical Scaling on Normal Probability Density Function Plots","uid":"v-full-1638","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Despite the development of numerous visual analytics tools for event sequence data across various domains, including, but not limited to healthcare, digital marketing, and user behavior analysis, comparing these domain-specific investigations and transferring the results to new datasets and problem areas remain challenging. Task abstractions can help us go beyond domain-specific details, but existing visualization task abstractions are insufficient for event sequence visual analytics because they primarily focus on tabular datasets and often overlook automated analytical techniques. To address this gap, we propose a domain-agnostic multi-level task framework for event sequence analysis, derived from an analysis of 58 papers that present event sequence visualization systems. Our framework consists of four levels: objective, intent, strategy, and technique. Overall objectives identify the main goals of analysis. Intents comprises five high-level approaches adopted at each analysis step: augment data, simplify data, configure data, configure visualization, and create provenance. Each intent is accomplished through a number of strategies, for instance, data simplification can be achieved through aggregation, summarization, or segmentation. Finally, each strategy can be implemented by a set of techniques depending on the input and output components. We further show that techniques can be expressed through a quartet of action-input-output-criteria. 
We demonstrate the framework\u2019s power through mapping case studies and discuss its similarities and differences with previous event sequence task taxonomies.","accessible_pdf":false,"authors":[{"affiliations":["University of Maryland, College Park, College Park, United States"],"email":"kzintas@umd.edu","is_corresponding":true,"name":"Kazi Tasnim Zinat"},{"affiliations":["University of Maryland, College Park, United States"],"email":"ssakhamu@terpmail.umd.edu","is_corresponding":false,"name":"Saimadhav Naga Sakhamuri"},{"affiliations":["University of Maryland, College Park, United States"],"email":"achen151@terpmail.umd.edu","is_corresponding":false,"name":"Aaron Sun Chen"},{"affiliations":["University of Maryland, College Park, United States"],"email":"leozcliu@umd.edu","is_corresponding":false,"name":"Zhicheng Liu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Kazi Tasnim Zinat"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1642","time_end":"","time_stamp":"","time_start":"","title":"A Multi-Level Task Framework for Event Sequence Analysis","uid":"v-full-1642","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"In recent years, the global adoption of electric vehicles (EVs) has surged, prompting a corresponding rise in the installation of charging stations. This proliferation has underscored the importance of expediting the deployment of charging infrastructure. Both academia and industry have thus devoted efforts to addressing the charging station location problem (CSLP) to streamline this process. However, prevailing algorithms addressing CSLP are hampered by restrictive assumptions and computational overhead, leading to a dearth of comprehensive evaluations in the spatiotemporal dimensions. Consequently, their practical viability is restricted. Moreover, the placement of charging stations exerts a significant impact on both the road network and the power grid, which necessitates the evaluation of the potential post-deployment impacts on these interconnected networks holistically. In this study, we propose CSLens, a visual analytics system designed to inform charging station deployment decisions through the lens of coupled transportation and power networks. CSLens offers multiple visualizations and interactive features, empowering users to delve into the existing charging station layout, explore alternative deployment solutions, and assess the ensuing impact. To validate the efficacy of CSLens, we conducted two case studies and engaged in interviews with domain experts. Through these efforts, we substantiated the usability and practical utility of CSLens in enhancing the decision-making process surrounding charging station deployment. 
Our findings underscore CSLens\u2019s potential to serve as a valuable asset in navigating the complexities of charging infrastructure planning.","accessible_pdf":false,"authors":[{"affiliations":["Sun Yat-sen University, Shenzhen, China"],"email":"zhangyt85@mail2.sysu.edu.cn","is_corresponding":false,"name":"Yutian Zhang"},{"affiliations":["Sun Yat-sen University, Shenzhen, China"],"email":"xulw8@mail2.sysu.edu.cn","is_corresponding":false,"name":"Liwen Xu"},{"affiliations":["Sun Yat-sen University, Shenzhen, China"],"email":"taoshc@mail2.sysu.edu.cn","is_corresponding":false,"name":"Shaocong Tao"},{"affiliations":["Sun Yat-sen University, Shenzhen, China"],"email":"guanqx3@mail.sysu.edu.cn","is_corresponding":false,"name":"Quanxue Guan"},{"affiliations":["ShanghaiTech University, Shanghai, China"],"email":"liquan@shanghaitech.edu.cn","is_corresponding":false,"name":"Quan Li"},{"affiliations":["Sun Yat-sen University, Shenzhen, China"],"email":"zenghp5@mail.sysu.edu.cn","is_corresponding":true,"name":"Haipeng Zeng"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Haipeng Zeng"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1681","time_end":"","time_stamp":"","time_start":"","title":"CSLens: Towards Better Deploying Charging Stations via Visual Analytics \u2014\u2014 A Coupled Networks Perspective","uid":"v-full-1681","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"We introduce a visual analysis method for multiple causality graphs with different outcome variables, namely, multi-outcome causality graphs. Multi-outcome causality graphs are important in healthcare for understanding multimorbidity and comorbidity. To support the visual analysis, we collaborated with medical experts to devise two comparative visualization techniques at different stages of the analysis process. First, a progressive visualization method is proposed for comparing multiple state-of-the-art causal discovery algorithms. The method can handle mixed-type datasets comprising both continuous and categorical variables and assist in the creation of a fine-tuned causality graph of a single outcome. Second, a comparative graph layout technique and specialized visual encodings are devised for the quick comparison of multiple causality graphs. In our visual analysis approach, analysts start by building individual causality graphs for each outcome variable, and then, multi-outcome causality graphs are generated and visualized with our comparative technique for analyzing differences and commonalities of these causality graphs. 
Evaluation includes quantitative measurements on benchmark datasets, a case study with a medical expert, and expert user studies with real-world health research data.","accessible_pdf":false,"authors":[{"affiliations":["Institute of Medical Technology, Peking University Health Science Center, Beijing, China","National Institute of Health Data Science, Peking University, Beijing, China"],"email":"mengjiefan@bjmu.edu.cn","is_corresponding":true,"name":"Mengjie Fan"},{"affiliations":["Beihang University, Beijing, China","Peking University, Beijing, China"],"email":"yu.jinlu@qq.com","is_corresponding":false,"name":"Jinlu Yu"},{"affiliations":["University of Stuttgart, Stuttgart, Germany"],"email":"weiskopf@visus.uni-stuttgart.de","is_corresponding":false,"name":"Daniel Weiskopf"},{"affiliations":["Tongji College of Design and Innovation, Shanghai, China"],"email":"nan.cao@gmail.com","is_corresponding":false,"name":"Nan Cao"},{"affiliations":["Beijing University of Chinese Medicine, Beijing, China"],"email":"wanghuaiyuelva@126.com","is_corresponding":false,"name":"Huaiyu Wang"},{"affiliations":["Peking University, Beijing, China"],"email":"zhoulng@pku.edu.cn","is_corresponding":false,"name":"Liang Zhou"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Mengjie Fan"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1693","time_end":"","time_stamp":"","time_start":"","title":"Visual Analysis of Multi-outcome Causal Graphs","uid":"v-full-1693","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Room-scale immersive data visualisations provide viewers with a wide-scale overview of a large dataset, but to interact precisely with individual data points they typically have to navigate to change their point of view. In traditional screen-based visualisations, focus-and-context techniques allow visualisation users to keep a full dataset in view while making detailed selections. Such techniques have been studied extensively on desktop to allow precise selection within large data sets, but they have not been explored in immersive 3D modalities. In this paper, we develop a novel immersive focus-and-context technique based on a \u201cmagic portal\u201d metaphor adapted specifically for data visualisation scenarios. An extendable-hand interaction technique is used to place a portal close to the region of interest. The other end of the portal then opens comfortably within the user's physical reach such that they can reach through to precisely select individual data points. Through a controlled study with 24 participants, we find strong evidence that portals reduce overshoots in selection and overall hand trajectory length, reducing arm fatigue compared to ranged interaction without the portal. The portals also enable us to use a robot arm to provide haptic feedback for data within the limited volume of the portal region.
We demonstrate applications for portal-based selection through two use-case scenarios.","accessible_pdf":false,"authors":[{"affiliations":["Monash University, Melbourne, Australia"],"email":"dai.shaozhang@gmail.com","is_corresponding":true,"name":"Shaozhang Dai"},{"affiliations":["Monash University, Melbourne, Australia"],"email":"yi.li5@monash.edu","is_corresponding":false,"name":"Yi Li"},{"affiliations":["The University of British Columbia (Okanagan Campus), Kelowna, Canada"],"email":"barrett.ens@ubc.ca","is_corresponding":false,"name":"Barrett Ens"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"lonni.besancon@gmail.com","is_corresponding":false,"name":"Lonni Besan\u00e7on"},{"affiliations":["Monash University, Melbourne, Australia"],"email":"tgdwyer@gmail.com","is_corresponding":false,"name":"Tim Dwyer"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Shaozhang Dai"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1699","time_end":"","time_stamp":"","time_start":"","title":"Precise Embodied Data Selection in Room-scale Visualisations While Retaining View Context","uid":"v-full-1699","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Contour trees describe the topology of level sets in scalar fields and are widely used in topological data analysis and visualization. A main challenge for utilizing contour trees for large-scale scientific data is their computation at scale using high-performance computing. To address this challenge, recent work has introduced distributed hierarchical contour trees for distributed computation and storage of contour trees. However, effective use of these distributed structures in analysis and visualization requires subsequent computation of geometric properties and branch decomposition to support contour extraction and exploration. In this work, we introduce distributed algorithms for augmentation, hypersweeps, and branch decomposition that enable parallel computation of geometric properties, and support the use of distributed contour trees as a query structure for scientific exploration. 
We evaluate the parallel performance of these algorithms and apply them to identify and extract important contours for scientific visualization.","accessible_pdf":false,"authors":[{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"mingzhefluorite@gmail.com","is_corresponding":true,"name":"Mingzhe Li"},{"affiliations":["University of Leeds, Leeds, United Kingdom"],"email":"h.carr@leeds.ac.uk","is_corresponding":false,"name":"Hamish Carr"},{"affiliations":["Lawrence Berkeley National Laboratory, Berkeley, United States"],"email":"oruebel@lbl.gov","is_corresponding":false,"name":"Oliver R\u00fcbel"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"wang.bei@gmail.com","is_corresponding":false,"name":"Bei Wang"},{"affiliations":["Lawrence Berkeley National Laboratory, Berkeley, United States"],"email":"ghweber@lbl.gov","is_corresponding":false,"name":"Gunther H Weber"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Mingzhe Li"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1705","time_end":"","time_stamp":"","time_start":"","title":"Distributed Augmentation, Hypersweeps, and Branch Decomposition of Contour Trees for Scientific Exploration","uid":"v-full-1705","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"The widespread use of Deep Neural Networks (DNNs) has recently resulted in their application to challenging scientific visualization tasks. While advanced DNNs demonstrate impressive generalization abilities, understanding factors like prediction quality, confidence, robustness, and uncertainty is crucial. These insights aid application scientists in making informed decisions. However, DNNs lack inherent mechanisms to measure prediction uncertainty, prompting the creation of distinct frameworks for constructing robust uncertainty-aware models tailored to various visualization tasks. In this work, we develop uncertainty-aware implicit neural representations to model steady-state vector fields effectively. We comprehensively evaluate the efficacy of two principled deep uncertainty estimation techniques: (1) Deep Ensemble and (2) Monte Carlo Dropout, aimed at enabling uncertainty-informed visual analysis of features within steady vector field data. Our detailed exploration using several vector data sets indicates that uncertainty-aware models generate informative visualization results of vector field features.
Furthermore, incorporating prediction uncertainty improves the resilience and interpretability of our DNN model, rendering it applicable for the analysis of complex vector field data sets.","accessible_pdf":false,"authors":[{"affiliations":["Indian Institute of Technology Kanpur , Kanpur, India"],"email":"atulkrfcb@gmail.com","is_corresponding":false,"name":"Atul Kumar"},{"affiliations":["Indian Institute of Technology Kanpur , Kanpur , India"],"email":"gsiddharth2209@gmail.com","is_corresponding":false,"name":"Siddharth Garg"},{"affiliations":["Indian Institute of Technology Kanpur (IIT Kanpur), Kanpur, India"],"email":"soumya.cvpr@gmail.com","is_corresponding":true,"name":"Soumya Dutta"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Soumya Dutta"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1708","time_end":"","time_stamp":"","time_start":"","title":"Uncertainty-Aware Deep Neural Representations for Visual Analysis of Vector Field Data","uid":"v-full-1708","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"User experience in data visualization is typically assessed through post-viewing self-reports, but these overlook the dynamic cognitive processes during interaction. This study explores the use of mind wandering as a dynamic measure during visualization exploration. Participants reported mind wandering while viewing visualizations from a pre-labeled visualization database and then provided quantitative ratings of trust, engagement, and design quality, along with qualitative descriptions and short-term/long-term recall assessments. Results show that mind wandering negatively affects short-term visualization recall and various post-viewing measures, particularly for visualizations with little text annotation. Further, the type of mind wandering impacts engagement and emotional response. Mind wandering also acts as a serial mediator between visualization design elements and post-viewing measures. Overall, this research underscores the importance of incorporating mind wandering as a dynamic measure in visualization design and evaluation, offering novel avenues for enhancing user engagement and comprehension.","accessible_pdf":false,"authors":[{"affiliations":["Arizona State University, Tempe, United States"],"email":"aarunku5@asu.edu","is_corresponding":true,"name":"Anjana Arunkumar"},{"affiliations":["Northeastern University, Boston, United States"],"email":"l.padilla@northeastern.edu","is_corresponding":false,"name":"Lace M. 
Padilla"},{"affiliations":["Arizona State University, Tempe, United States"],"email":"cbryan16@asu.edu","is_corresponding":false,"name":"Chris Bryan"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Anjana Arunkumar"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1726","time_end":"","time_stamp":"","time_start":"","title":"Mind Drifts, Data Shifts: Utilizing Mind Wandering to Track the Evolution of User Experience with Data Visualizations","uid":"v-full-1726","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Understanding the input and output of data wrangling scripts is crucial for various tasks like debugging codes and onboarding new data. However, existing research on script understanding primarily focuses on revealing the process of data transformations, lacking the ability to analyze the potential scope, i.e., the space of script inputs and outputs. Meanwhile, constructing input/output space during script analysis is challenging, as the wrangling scripts could be semantically complex and diverse, and the association between different data objects is intricate. To facilitate data workers in understanding the input and output spaces of wrangling scripts, we summarize ten types of constraints to express table spaces, and build a mapping between data transformations and these constraints to guide the construction of the input/output for individual transformations. Then, we propose a constraint generation model for integrating table constraints across multiple transformations. Based on the model, we develop Ferry, an interactive system that extracts and visualizes the data constraints describing the input and output spaces of data wrangling scripts, thereby enabling users to grasp the high-level semantics of complex scripts and locate the origins of faulty data transformations. Besides, Ferry provides example input and output data to assist users in interpreting the extracted constraints, checking and resolving the conflicts between these constraints and any uploaded dataset. Ferry's effectiveness and usability are evaluated via a usage scenario and two case studies: the first assists users in onboarding new data and debugging scripts, while the second verifies input-output compatibility across data processing modules. 
Furthermore, an illustrative application is presented to demonstrate Ferry's flexibility.","accessible_pdf":false,"authors":[{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"rickyluozs@gmail.com","is_corresponding":true,"name":"Zhongsu Luo"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"kaixiong@zju.edu.cn","is_corresponding":false,"name":"Kai Xiong"},{"affiliations":["Zhejiang University, Hangzhou,Zhejiang, China"],"email":"3220105578@zju.edu.cn","is_corresponding":false,"name":"Jiajun Zhu"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"chenran928@zju.edu.cn","is_corresponding":false,"name":"Ran Chen"},{"affiliations":["Newcastle University, Newcastle Upon Tyne, United Kingdom"],"email":"xinhuan.shu@gmail.com","is_corresponding":false,"name":"Xinhuan Shu"},{"affiliations":["Zhejiang University, Ningbo, China"],"email":"dweng@zju.edu.cn","is_corresponding":false,"name":"Di Weng"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"ycwu@zju.edu.cn","is_corresponding":false,"name":"Yingcai Wu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Zhongsu Luo"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1730","time_end":"","time_stamp":"","time_start":"","title":"Ferry: Toward Better Understanding of Input/Output Space for Data Wrangling Scripts","uid":"v-full-1730","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"As a step towards improving visualization literacy, we investigated how students approach reading visualizations differently after taking a university-level visualization course. We asked students to verbally walk through their process of making sense of unfamiliar visualizations, and conducted a qualitative analysis of these walkthroughs. Our qualitative analysis found changes in students' walkthroughs consistent with explicit learning goals of visualization courses. After taking a visualization course, students also engaged with visualizations in more sophisticated ways not fully captured by explicit learning goals: they were more likely to exhibit design empathy by thinking critically about the tradeoffs behind why a chart was designed in a particular way, and were better able to deconstruct a chart to make sense of it. We also gave students a quantitative assessment of visualization literacy and found no evidence of scores improving after the class, likely because the test we used focused on a different set of skills than those emphasized in visualization classes. While current measurement instruments for visualization literacy are useful, we propose developing standardized assessments for additional aspects of visualization literacy, such as deconstruction and design empathy. We also suggest those additional aspects could be made more explicit in learning goals set by visualization educators. 
All supplemental materials are available at https://osf.io/w5pum/?view_only=f9eca3fa4711425582d454031b9c482e.","accessible_pdf":false,"authors":[{"affiliations":["Northwestern University, Evanston, United States"],"email":"maryam.hedayati@u.northwestern.edu","is_corresponding":true,"name":"Maryam Hedayati"},{"affiliations":["Northwestern University, Chicago, United States"],"email":"matthew.kay@gmail.com","is_corresponding":false,"name":"Matthew Kay"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Maryam Hedayati"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1738","time_end":"","time_stamp":"","time_start":"","title":"What University Students Learn In Visualization Classes","uid":"v-full-1738","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Hypergraphs provide a natural way to represent polyadic relationships in network data. For large hypergraphs, it is often difficult to visually detect structures within the data. Recently, a scalable polygon-based visualization framework was developed allowing hypergraphs with thousands of hyperedges to be simplified and examined at different levels of detail. However, this approach does not consider structures such as cycles, bridges, and branches. Consequently, structures can be lost at simplified scales, making interpretations for real-world applications unreliable. In this paper, we define hypergraph structures using the bipartite graph representation. Powered by our analysis, we provide an algorithm to decompose large hypergraphs into meaningful features and to identify regions of non-planarity. We also introduce a set of topology preserving and topology altering atomic operations, enabling the preservation of important structures while removing topological noise in simplified scales. We demonstrate our approach in several real-world applications.","accessible_pdf":false,"authors":[{"affiliations":["Oregon State University, Corvallis, United States"],"email":"oliverpe@oregonstate.edu","is_corresponding":false,"name":"Peter D Oliver"},{"affiliations":["Oregon State University, Corvallis, United States"],"email":"zhange@eecs.oregonstate.edu","is_corresponding":true,"name":"Eugene Zhang"},{"affiliations":["Oregon State University, Corvallis, United States"],"email":"zhangyue@oregonstate.edu","is_corresponding":false,"name":"Yue Zhang"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Eugene Zhang"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1746","time_end":"","time_stamp":"","time_start":"","title":"Structure-Aware Simplification for Hypergraph Visualization","uid":"v-full-1746","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"The semantic similarity between documents of a text corpus can be visualized using map-like metaphors based on two-dimensional scatterplot layouts. 
These layouts result from a dimensionality reduction on the document-term matrix or a representation within a latent embedding, including topic models. The resulting layout therefore depends on the input data and the hyperparameters of the dimensionality reduction and is affected by changes to either. However, such changes to the layout require additional cognitive effort from the user. In this work, we present a sensitivity study that analyzes the stability of these layouts concerning (1) changes in the text corpora, (2) changes in the hyperparameters, and (3) randomness in the initialization. Our approach has two stages: data measurement and data analysis. First, we derived layouts for the combinations of three text corpora, six text embeddings, and a grid-search-inspired hyperparameter selection of the dimensionality reductions. Afterward, we quantified the similarity of the layouts through ten metrics concerning local and global structures and class separation. Second, we analyzed the resulting 42,817 tabular data points in a descriptive statistical analysis. From this, we derived guidelines for informed decisions on the layout algorithm and highlighted specific hyperparameter settings. We provide our implementation and results as a Git repository at https://github.com/hpicgs/Topic-Models-and-Dimensionality-Reduction-Sensitivity-Study.","accessible_pdf":false,"authors":[{"affiliations":["University of Potsdam, Digital Engineering Faculty, Hasso Plattner Institute, Potsdam, Germany"],"email":"daniel.atzberger@hpi.de","is_corresponding":true,"name":"Daniel Atzberger"},{"affiliations":["University of Potsdam, Potsdam, Germany"],"email":"tcech@uni-potsdam.de","is_corresponding":false,"name":"Tim Cech"},{"affiliations":["Hasso Plattner Institute, Faculty of Digital Engineering, University of Potsdam, Potsdam, Germany"],"email":"willy.scheibel@hpi.de","is_corresponding":false,"name":"Willy Scheibel"},{"affiliations":["Hasso Plattner Institute"],"email":"juergen.doellner@hpi.de","is_corresponding":false,"name":"J\u00fcrgen D\u00f6llner"},{"affiliations":["Utrecht University, Utrecht, Netherlands"],"email":"m.behrisch@uu.nl","is_corresponding":false,"name":"Michael Behrisch"},{"affiliations":["Graz University of Technology, Graz, Austria"],"email":"tobias.schreck@cgv.tugraz.at","is_corresponding":false,"name":"Tobias Schreck"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Daniel Atzberger"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1770","time_end":"","time_stamp":"","time_start":"","title":"A Large-Scale Sensitivity Analysis on Latent Embeddings and Dimensionality Reductions for Text Spatializations","uid":"v-full-1770","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"This research explores a novel paradigm for preserving topological segmentations in existing error-bounded lossy compressors.
Today's lossy compressors rarely consider preserving topologies such as Morse-Smale complexes, and the discrepancies in topology between original and decompressed datasets could potentially result in erroneous interpretations or even incorrect scientific conclusions. In this paper, we focus on preserving Morse-Smale segmentations in 2D/3D piecewise linear scalar fields, targeting the precise reconstruction of minimum/maximum labels induced by the integral curve of each vertex. The key is to derive a series of edits during compression time; the edits are applied to the decompressed data, leading to an accurate reconstruction of segmentations while keeping the error within the prescribed error bound. To this end, we developed a workflow to fix extrema and integral curves alternately until convergence within finite iterations; we accelerate each workflow component with shared-memory/GPU parallelism to make the performance practical for coupling with compressors. We demonstrate use cases with fluid dynamics, ocean, and cosmology application datasets, achieving a 1000x acceleration with an NVIDIA A100 GPU.","accessible_pdf":false,"authors":[{"affiliations":["The Ohio State University, Columbus, United States"],"email":"li.14025@osu.edu","is_corresponding":true,"name":"Yuxiao Li"},{"affiliations":["University of California, Riverside, Riverside, United States"],"email":"xlian007@ucr.edu","is_corresponding":false,"name":"Xin Liang"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"wang.bei@gmail.com","is_corresponding":false,"name":"Bei Wang"},{"affiliations":["The Ohio State University, Columbus, United States"],"email":"qiu.722@osu.edu","is_corresponding":false,"name":"Yongfeng Qiu"},{"affiliations":["Argonne National Laboratory, Lemont, United States"],"email":"lyan@anl.gov","is_corresponding":false,"name":"Lin Yan"},{"affiliations":["The Ohio State University, Columbus, United States"],"email":"guo.2154@osu.edu","is_corresponding":false,"name":"Hanqi Guo"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yuxiao Li"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1793","time_end":"","time_stamp":"","time_start":"","title":"MSz: An Efficient Parallel Algorithm for Correcting Morse-Smale Segmentations in Error-Bounded Lossy Compressors","uid":"v-full-1793","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"In the biomedical domain, visualizing the document embeddings of an extensive corpus has been widely used in information-seeking tasks. However, three key challenges with existing visualizations make it difficult for clinicians to find information efficiently. First, the document embeddings used in these visualizations are generated statically by pretrained language models, which cannot adapt to the user's evolving interest. Second, existing document visualization techniques cannot effectively display how the documents are relevant to users\u2019 interest, making it difficult for users to identify the most pertinent information. Third, existing embedding generation and visualization processes suffer from a lack of interpretability, making it difficult to understand, trust, and use the result for decision-making.
In this paper, we present a novel visual analytics pipeline for user-driven document representation and iterative information seeking (VADIS). VADIS introduces a prompt-based attention model (PAM) that generates dynamic document embedding and document relevance adjusted to the user's query. To effectively visualize these two pieces of information, we design a new document map that leverages a circular grid layout to display documents based on both their relevance to the query and their semantic similarity. Additionally, to improve interpretability, we introduce a corpus-level attention visualization method that improves the user's understanding of the model focus and enables users to identify potential oversights. This visualization, in turn, empowers users to refine, update, and introduce new queries, thereby facilitating a dynamic and iterative information-seeking experience. We evaluated VADIS quantitatively and qualitatively on a real-world dataset of biomedical research papers to demonstrate its effectiveness.","accessible_pdf":false,"authors":[{"affiliations":["Ohio State University, Columbus, United States"],"email":"qiu.580@buckeyemail.osu.edu","is_corresponding":true,"name":"Rui Qiu"},{"affiliations":["The Ohio State University, Columbus, United States"],"email":"tu.253@osu.edu","is_corresponding":false,"name":"Yamei Tu"},{"affiliations":["Washington University School of Medicine in St. Louis, St. Louis, United States"],"email":"yenp@wustl.edu","is_corresponding":false,"name":"Po-Yin Yen"},{"affiliations":["The Ohio State University, Columbus, United States"],"email":"hwshen@cse.ohio-state.edu","is_corresponding":false,"name":"Han-Wei Shen"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Rui Qiu"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1802","time_end":"","time_stamp":"","time_start":"","title":"VADIS: A Visual Analytics Pipeline for Dynamic Document Representation and Information Seeking","uid":"v-full-1802","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Scalar field comparison is a fundamental task in scientific visualization. In topological data analysis, we compare topological descriptors of scalar fields, such as persistence diagrams and merge trees, as they provide succinct and robust abstract representations. While several similarity measures for topological descriptors seem to be both asymptotically and practically efficient with polynomial time algorithms, they do not scale well when handling large-scale, time-varying scientific data and ensembles. In this paper, we propose a new framework to facilitate the comparative analysis of merge trees, inspired by tools from locality sensitive hashing (LSH). LSH hashes similar objects into the same hash buckets with high probability. We propose two new similarity measures for merge trees that can be computed via LSH, using new extensions to Recursive MinHash and subpath signature, respectively. Our similarity measures are extremely efficient to compute and closely resemble the results of existing measures such as merge tree edit distance or geometric interleaving distance.
Our experiments demonstrate the utility of our LSH framework in applications such as shape matching, clustering, key event detection, and ensemble summarization.","accessible_pdf":false,"authors":[{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"lyuweiran@gmail.com","is_corresponding":false,"name":"Weiran Lyu"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"g.s.raghavendra@gmail.com","is_corresponding":true,"name":"Raghavendra Sridharamurthy"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"jeffp@cs.utah.edu","is_corresponding":false,"name":"Jeff M. Phillips"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"wang.bei@gmail.com","is_corresponding":false,"name":"Bei Wang"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Raghavendra Sridharamurthy"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1803","time_end":"","time_stamp":"","time_start":"","title":"Fast Comparative Analysis of Merge Trees Using Locality-Sensitive Hashing","uid":"v-full-1803","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"The optimization of cooling systems is important in many cases, for example, for cabin and battery cooling in electric cars. Such an optimization is governed by multiple, conflicting objectives, and it is performed across a multi-dimensional parameter space. The extent of the parameter space, the complexity of the non-linear model of the system, as well as the time needed per simulation run and factors that are not modeled in the simulation necessitate an iterative, semi-automatic approach. We present an interactive visual optimization approach, where the user works with a p-h diagram to steer an iterative, guided optimization process. A deep learning (DL) model provides estimates for parameters, given a target characterization of the system, while numerical simulation is used to predict system characteristics for an ensemble of parameter sets. Since the DL model only serves as an approximation of the inverse of the cooling system and since target characteristics can be chosen according to different, competing objectives, an iterative optimization process is realized, developing multiple sets of intermediate solutions, which are visually related to each other. The standard p-h diagram, integrated interactively in this approach, is complemented by a dual, also interactive visual representation of additional expressive measures representing the system characteristics. We show how the known four-point semantics of the p-h diagram meaningfully transfers to the dual data representation.
When evaluating this approach with our partners in the automotive domain, we found that our solution helped with the overall comprehension of the cooling system and that it led to faster convergence during optimization.","accessible_pdf":false,"authors":[{"affiliations":["VRVis Research Center, Vienna, Austria"],"email":"splechtna@vrvis.at","is_corresponding":false,"name":"Rainer Splechtna"},{"affiliations":["Virginia Tech, Blacksburg, United States"],"email":"behravan@vt.edu","is_corresponding":false,"name":"Majid Behravan"},{"affiliations":["AVL AST doo, Zagreb, Croatia"],"email":"mario.jelovic@avl.com","is_corresponding":false,"name":"Mario Jelovic"},{"affiliations":["Virginia Tech, Blacksburg, United States"],"email":"gracanin@vt.edu","is_corresponding":false,"name":"Denis Gracanin"},{"affiliations":["University of Bergen, Bergen, Norway"],"email":"helwig.hauser@uib.no","is_corresponding":false,"name":"Helwig Hauser"},{"affiliations":["VRVis Research Center, Vienna, Austria"],"email":"matkovic@vrvis.at","is_corresponding":true,"name":"Kresimir Matkovic"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Kresimir Matkovic"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1805","time_end":"","time_stamp":"","time_start":"","title":"Interactive Design-of-Experiments: Optimizing a Cooling System","uid":"v-full-1805","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Visualizing relational data is crucial for understanding complex connections between entities in social networks, political affiliations, or biological interactions. Well-known representations like node-link diagrams and adjacency matrices offer valuable insights, but their effectiveness relies on the ability to identify patterns in the underlying topological structure. Reordering strategies and layout algorithms play a vital role in the visualization process since the arrangement of nodes, edges, or cells influences the visibility of these patterns. The BioFabric visualization combines elements of node-link diagrams and adjacency matrices, leveraging the strengths of both: the visual clarity of node-link diagrams and the tabular organization of adjacency matrices. A unique characteristic of BioFabric is the possibility to reorder nodes and edges separately. This raises the question of which combination of layout algorithms best reveals certain patterns. In this paper, we discuss patterns and anti-patterns in BioFabric, such as staircases or escalators, relate them to already established patterns, and propose metrics to evaluate their quality. Based on these quality metrics, we compared combinations of well-established reordering techniques applied to BioFabric with a well-known benchmark data set. Our experiments indicate that the edge order has a stronger influence on revealing patterns than the node layout. The results show that the best combination for revealing staircases is a barycentric node layout, together with an edge order based on node indices and length. Our research contributes a first building block for many promising future research directions, which we also share and discuss.
A free copy of this paper and all supplemental materials are available at OSF.","accessible_pdf":false,"authors":[{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"fuchs@dbvis.inf.uni-konstanz.de","is_corresponding":true,"name":"Johannes Fuchs"},{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"alexander.frings@uni-konstanz.de","is_corresponding":false,"name":"Alexander Frings"},{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"maria-viktoria.heinle@uni-konstanz.de","is_corresponding":false,"name":"Maria-Viktoria Heinle"},{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"keim@uni-konstanz.de","is_corresponding":false,"name":"Daniel Keim"},{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"sara.di-bartolomeo@uni-konstanz.de","is_corresponding":false,"name":"Sara Di Bartolomeo"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Johannes Fuchs"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1809","time_end":"","time_stamp":"","time_start":"","title":"Quality Metrics and Reordering Strategies for Revealing Patterns in BioFabric Visualizations","uid":"v-full-1809","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Classical bibliography, by scrutinizing preserved catalogs from both official archives and personal collections of accumulated books, examines the books throughout history, thereby elucidating cultural development across historical periods. In this work, we collaborate with domain experts to accomplish the task of data annotation concerning ancient Chinese catalogs. We introduce the CataAnno system that facilitates users in completing annotations more efficiently through cross-linked views, recommendation methods, and convenient annotation interactions. The recommendation method can learn the background knowledge and annotation patterns that experts subconsciously integrate into the data during prior annotation processes. CataAnno searches for the most relevant examples previously annotated and recommends them to the user. Meanwhile, the cross-linked views assist users in comprehending the correlations between entries and offer explanations for these recommendations. Evaluation and expert feedback confirm that the CataAnno system, by offering high-quality recommendations and visualizing the relationships between entries, can mitigate the necessity for specialized knowledge during the annotation process.
This results in enhanced accuracy and consistency in annotations, thereby improving overall annotation efficiency.","accessible_pdf":false,"authors":[{"affiliations":["Peking University, Beijing, China"],"email":"hanning.shao@pku.edu.cn","is_corresponding":true,"name":"Hanning Shao"},{"affiliations":["Peking University, Beijing, China"],"email":"xiaoru.yuan@pku.edu.cn","is_corresponding":false,"name":"Xiaoru Yuan"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Hanning Shao"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1810","time_end":"","time_stamp":"","time_start":"","title":"CataAnno: An Ancient Catalog Annotator for Annotation Cleaning by Recommendation","uid":"v-full-1810","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Over the past decade, several urban visual analytics systems have been proposed to tackle a host of challenges faced by cities, in areas as diverse as transportation, weather, and real estate. Many of these systems have been designed through engagement with urban experts, aiming to distill intricate urban analysis workflows into interactive visualizations and interfaces. The design, implementation, and practical use of these systems, however, still rely on siloed approaches that lead to bespoke tools that are hard to reproduce and extend. At the design level, these systems undervalue rich data workflows from urban experts by usually only treating them as data providers and evaluators. At the implementation level, these systems lack interoperability with other technical frameworks. At the practical use level, these systems tend to be narrowly focused on specific fields, inadvertently creating barriers for cross-domain collaboration. To tackle these gaps, we present Curio, a framework for collaborative urban visual analytics. Curio uses a dataflow model with multiple abstraction levels (code, grammar, GUI elements) to facilitate collaboration across the design and implementation of visual analytics components. The framework allows experts to intertwine preprocessing, managing, and visualization stages while tracking provenance of code and visualizations. In collaboration with urban experts, we evaluate Curio through a diverse series of use cases targeting urban accessibility, urban microclimate, and sunlight access.
These cases use different types of urban data and domain methodologies to illustrate Curio's flexibility in tackling pressing societal challenges.","accessible_pdf":false,"authors":[{"affiliations":["University of Illinois at Chicago, Chicago, United States"],"email":"gmorei3@uic.edu","is_corresponding":false,"name":"Gustavo Moreira"},{"affiliations":["Massachusetts Institute of Technology , Somerville, United States"],"email":"maryamh@mit.edu","is_corresponding":false,"name":"Maryam Hosseini"},{"affiliations":["University of Illinois Urbana-Champaign, Urbana-Champaign, United States"],"email":"carolinavfs@id.uff.br","is_corresponding":false,"name":"Carolina Veiga Ferreira de Souza"},{"affiliations":["Universidade Federal Fluminense, Niteroi, Brazil"],"email":"lucasalexandre.s.cc@gmail.com","is_corresponding":false,"name":"Lucas Alexandre"},{"affiliations":["Politecnico di Milano, Milano, Italy"],"email":"nicola.colaninno@polimi.it","is_corresponding":false,"name":"Nicola Colaninno"},{"affiliations":["Universidade Federal Fluminense, Niter\u00f3i, Brazil"],"email":"danielcmo@ic.uff.br","is_corresponding":false,"name":"Daniel de Oliveira"},{"affiliations":["Universidade Federal de Pernambuco, Recife, Brazil"],"email":"nivan@cin.ufpe.br","is_corresponding":false,"name":"Nivan Ferreira"},{"affiliations":["Universidade Federal Fluminense , Niteroi, Brazil"],"email":"mlage@ic.uff.br","is_corresponding":false,"name":"Marcos Lage"},{"affiliations":["University of Illinois Chicago, Chicago, United States"],"email":"fabiom@uic.edu","is_corresponding":true,"name":"Fabio Miranda"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Fabio Miranda"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1830","time_end":"","time_stamp":"","time_start":"","title":"Curio: A Dataflow-Based Framework for Collaborative Urban Visual Analytics","uid":"v-full-1830","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"When using exploratory visual analysis to examine multivariate hierarchical data, users often need to query data to narrow down the scope of analysis. However, formulating effective query expressions remains a challenge for multivariate hierarchical data, particularly when datasets become very large. To address this issue, we develop a declarative grammar, HiRegEx (Hierarchical data Regular Expression), for querying and exploring multivariate hierarchical data. Rooted in the extended multi-level task topology framework for tree visualizations (e-MLTT), HiRegEx delineates three query targets (node, path, and subtree) and two aspects for querying these targets (features and positions), and uses operators developed based on classical regular expressions for query construction. We develop a prototype system, TreeQueryER, to integrate an exploratory framework for querying and exploring multivariate hierarchical data based on HiRegEx. The exploratory framework includes three major components: top-down pattern specification, bottom-up data-driven inquiry, and context-creation data overview. 
We validate the expressiveness of HiRegEx with the tasks from the e-MLTT framework and showcase its utility and effectiveness through a usage scenario involving expert users in the analysis of a citation tree dataset.","accessible_pdf":false,"authors":[{"affiliations":["Beijing Institute of Technology, Beijing, China"],"email":"guozhg.li@gmail.com","is_corresponding":true,"name":"Guozheng Li"},{"affiliations":["Beijing Institute of Technology, Beijing, China"],"email":"haotian.mi1@gmail.com","is_corresponding":false,"name":"haotian mi"},{"affiliations":["Beijing Institute of Technology, Beijing, China"],"email":"liuchi02@gmail.com","is_corresponding":false,"name":"Chi Harold Liu"},{"affiliations":["Ochanomizu University, Tokyo, Japan"],"email":"itot@is.ocha.ac.jp","is_corresponding":false,"name":"Takayuki Itoh"},{"affiliations":["Beijing Institute of Technology, Beijing, China"],"email":"wanggrbit@126.com","is_corresponding":false,"name":"Guoren Wang"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Guozheng Li"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1831","time_end":"","time_stamp":"","time_start":"","title":"HiRegEx: Interactive Visual Query and Exploration of Multivariate Hierarchical Data","uid":"v-full-1831","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"The concept of an intelligent augmented reality (AR) assistant has applications as significant as they are wide-ranging, with potential uses in medicine, military endeavors, and mechanics. Such an assistant must be able to perceive the performer\u2019s environment and actions, reason about the state of the environment in relation to a given task, and seamlessly interact with the performer. These interactions typically involve an AR headset equipped with a variety of sensors which capture video, audio, and haptic feedback. Previous works have sought to facilitate the development of such an assistant by visualizing these sensor data streams as well as the machine learning model outputs that support an assistant\u2019s perception and reasoning capabilities. However, existing visual analytics systems do not include biometric data or focus on user modeling, and are only capable of visualizing a single task session for a single performer at a time. Furthermore, they mainly focus on traditional task analysis that typically assumes a linear progression from one step to the next. We propose a visual analytics system that allows users to compare performance during multiple task sessions focusing on non-linear tasks where different paths or sequences can lead to the successful completion of the task. In particular, we design visualizations for understanding user behavior through functional near-infrared spectroscopy (fNIRS) data as a proxy for perception, attention, and memory as well as corresponding motion data (acceleration, angular velocity, and eye gaze). We distill these insights into visual embeddings that allow users to easily select groups of sessions with similar behaviors. We provide case studies that explore how insights into task performance can be gleaned from these visualizations using data collected during helicopter copilot training tasks. 
Finally, we evaluate our approach by conducting an in-depth examination of a think-aloud experiment with five domain experts.","accessible_pdf":false,"authors":[{"affiliations":["New York University, New York, United States"],"email":"s.castelo@nyu.edu","is_corresponding":true,"name":"Sonia Castelo Quispe"},{"affiliations":["New York University, New York, United States"],"email":"jlrulff@gmail.com","is_corresponding":false,"name":"Jo\u00e3o Rulff"},{"affiliations":["New York University, Brooklyn, United States"],"email":"pss442@nyu.edu","is_corresponding":false,"name":"Parikshit Solunke"},{"affiliations":["New York University, New York, United States"],"email":"erin.mcgowan@nyu.edu","is_corresponding":false,"name":"Erin McGowan"},{"affiliations":["New York University, New York City, United States"],"email":"guandewu@nyu.edu","is_corresponding":false,"name":"Guande Wu"},{"affiliations":["New York University, Brooklyn, United States"],"email":"iran@ccrma.stanford.edu","is_corresponding":false,"name":"Iran Roman"},{"affiliations":["New York University, New York, United States"],"email":"rlopez@nyu.edu","is_corresponding":false,"name":"Roque Lopez"},{"affiliations":["New York University, Brooklyn, United States"],"email":"bs3639@nyu.edu","is_corresponding":false,"name":"Bea Steers"},{"affiliations":["New York University, New York, United States"],"email":"qisun@nyu.edu","is_corresponding":false,"name":"Qi Sun"},{"affiliations":["New York University, New York, United States"],"email":"jpbello@nyu.edu","is_corresponding":false,"name":"Juan Pablo Bello"},{"affiliations":["Northrop Grumman Mission Systems, Redondo Beach, United States"],"email":"bradley.feest@ngc.com","is_corresponding":false,"name":"Bradley S Feest"},{"affiliations":["Northrop Grumman, Aurora, United States"],"email":"michael.middleton@ngc.com","is_corresponding":false,"name":"Michael Middleton"},{"affiliations":["Northrop Grumman, Falls Church, United States"],"email":"ryan.mckendrick@ngc.com","is_corresponding":false,"name":"Ryan McKendrick"},{"affiliations":["New York University, New York City, United States"],"email":"csilva@nyu.edu","is_corresponding":false,"name":"Claudio Silva"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Sonia Castelo Quispe"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1833","time_end":"","time_stamp":"","time_start":"","title":"HuBar: A Visual Analytics Tool to Explore Human Behaviour based on fNIRS in AR guidance systems","uid":"v-full-1833","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Shape is commonly used to distinguish between categories in multi-class scatterplots. However, existing guidelines for choosing effective shape palettes rely largely on intuition and do not consider how these needs may change as the number of categories increases. Although shapes, unlike colors, are finite in number, they cannot be represented in a numerical space, making it difficult to propose a general guideline for shape choices or to shed light on the design heuristics of designer-crafted shape palettes. This paper presents a series of four experiments evaluating the efficiency of 39 shapes across three tasks: relative mean judgment tasks, expert choices, and data correlation estimation.
Given how complex and entangled the results are, rather than relying on conventional shape features for modeling, we built a model and introduced a corresponding design tool that offers recommendations for shape encodings. The perceptual effectiveness of shapes varies significantly across specific pairs, and certain shapes may enhance perceptual efficiency and accuracy. However, how performance varies does not map well to classical features of shape such as angles, fill, or convex hull. We developed a model based on pairwise relations between shapes measured in our experiments and on the number of shapes required, to intelligently recommend shape palettes for a given design. This tool provides designers with agency over shape selection while incorporating empirical elements of perceptual performance captured in our study. Our model advances the understanding of shape perception in visualization contexts and provides practical design guidelines for advanced shape usage in visualization design that optimize perceptual efficiency.","accessible_pdf":false,"authors":[{"affiliations":["University of North Carolina-Chapel Hill, Chapel Hill, United States"],"email":"chint@cs.unc.edu","is_corresponding":true,"name":"Chin Tseng"},{"affiliations":["University of North Carolina-Chapel Hill, Chapel Hill, United States"],"email":"zeyuwang@cs.unc.edu","is_corresponding":false,"name":"Arran Zeyu Wang"},{"affiliations":["University of Oklahoma, Norman, United States"],"email":"quadri@ou.edu","is_corresponding":false,"name":"Ghulam Jilani Quadri"},{"affiliations":["University of North Carolina-Chapel Hill, Chapel Hill, United States"],"email":"danielle.szafir@cs.unc.edu","is_corresponding":false,"name":"Danielle Albers Szafir"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Chin Tseng"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1836","time_end":"","time_stamp":"","time_start":"","title":"An Empirically Grounded Approach for Designing Shape Palettes","uid":"v-full-1836","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"In medical diagnostics, for both early disease detection and routine patient care, particle-based contamination of in-vitro diagnostics (IVD) consumables poses a significant threat to patients. Objective, data-driven decision making on the severity of contamination is key for reducing risk to patients while saving time and cost in the quality assessment process. Our collaborators introduced us to their quality control process, including particle data acquisition through image recognition, feature extraction, and attributes reflecting the production context of particles. Shortcomings of the current process are analysis problems, such as weak support for exploring thousands of particle images and their associated attributes, and ineffective knowledge externalization for sense-making. Following the design study methodology, our contributions are a characterization of the problem space and requirements, the development and validation of DaedalusData, a comprehensive discussion of our study\u2019s learnings, and a generalizable approach for knowledge externalization.
DaedalusData is a visual analytics system that empowers domain experts to explore particle contamination patterns, to label particles in label alphabets, and to externalize knowledge through semi-supervised label-informed data projections. The results of our case study show that DaedalusData supports experts in generating meaningful, comprehensive data overviews. Additionally, our user study evaluation shows that DaedalusData offers high usability, efficiently supports the labeling of large quantities of particles, and utilizes externalized knowledge to augment the dataset. Reflecting on our approach, we discuss insights on dataset augmentation via human knowledge externalization, and on the scalability and trade-offs that come with the adoption of this approach in practice.","accessible_pdf":false,"authors":[{"affiliations":["University of Z\u00fcrich, Z\u00fcrich, Switzerland","Roche pRED, Basel, Switzerland"],"email":"alexander.wyss@protonmail.com","is_corresponding":true,"name":"Alexander Wyss"},{"affiliations":["University of Zurich, Zurich, Switzerland"],"email":"gab.morgenshtern@gmail.com","is_corresponding":false,"name":"Gabriela Morgenshtern"},{"affiliations":["Roche Diagnostics International, Rotkreuz, Switzerland"],"email":"a.hirschhuesler@gmail.com","is_corresponding":false,"name":"Amanda Hirsch-H\u00fcsler"},{"affiliations":["University of Zurich, Zurich, Switzerland"],"email":"bernard@ifi.uzh.ch","is_corresponding":false,"name":"J\u00fcrgen Bernard"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Alexander Wyss"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1865","time_end":"","time_stamp":"","time_start":"","title":"DaedalusData: Exploration, Knowledge Externalization and Labeling of Particles in Medical Manufacturing - A Design Study","uid":"v-full-1865","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Feature grid Scene Representation Networks (SRNs) have been applied to scientific data as compact functional surrogates for analysis and visualization. As SRNs are black-box lossy data representations, assessing the prediction quality is critical for scientific visualization applications to ensure that scientists can trust the information being visualized. Currently, existing architectures do not support inference-time reconstruction quality assessment, as voxel-wise errors cannot be evaluated in the absence of ground truth data. By employing uncertain neural network architectures in feature grid SRNs, we obtain prediction variances during inference time to facilitate confidence-aware data reconstruction. Specifically, we propose a parameter-efficient multi-decoder Ensemble SRN (E-SRN) architecture consisting of a shared feature grid with multiple lightweight multi-layer perceptron decoders. E-SRN can generate a set of plausible predictions for a given input coordinate to compute the mean as the ensemble prediction and the variance as a confidence score. The voxel-wise variance can be rendered along with the data to inform the reconstruction quality, or be integrated into uncertainty-aware volume visualization algorithms.
To prevent misalignment between the quantified variance and the prediction quality, we propose a novel variance regularization loss for ensemble learning that encourages the Regularized Ensemble SRN (RE-SRN) to produce a more reliable variance that correlates closely with the true model error. We comprehensively evaluate the quality of variance quantification and data reconstruction of Monte Carlo Dropout (MCD), Mean Field Variational Inference (MFVI), Deep Ensemble (DE), and Predicting Variance (PV) in comparison with our proposed E-SRN and RE-SRN applied to state-of-the-art feature grid SRNs across diverse scalar field datasets. We demonstrate that RE-SRN attains the most accurate data reconstruction and competitive variance-error correlation among uncertain SRNs under the same neural network parameter budgets. Furthermore, we present an adaptation of uncertainty-aware volume rendering and shed light on the potential of incorporating uncertain predictions in improving the quality of volume rendering for uncertain SRNs. Through ablation studies on the regularization strength and ensemble size, we show that E-SRN and RE-SRN are expected to perform sufficiently well with a default configuration, without requiring customized hyperparameter settings for different datasets.","accessible_pdf":false,"authors":[{"affiliations":["The Ohio State University, Columbus, United States"],"email":"xiong.336@osu.edu","is_corresponding":true,"name":"Tianyu Xiong"},{"affiliations":["The Ohio State University, Columbus, United States"],"email":"wurster.18@osu.edu","is_corresponding":false,"name":"Skylar Wolfgang Wurster"},{"affiliations":["The Ohio State University, Columbus, United States","Argonne National Laboratory, Lemont, United States"],"email":"guo.2154@osu.edu","is_corresponding":false,"name":"Hanqi Guo"},{"affiliations":["Argonne National Laboratory, Lemont, United States"],"email":"tpeterka@mcs.anl.gov","is_corresponding":false,"name":"Tom Peterka"},{"affiliations":["The Ohio State University, Columbus, United States"],"email":"hwshen@cse.ohio-state.edu","is_corresponding":false,"name":"Han-Wei Shen"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Tianyu Xiong"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1866","time_end":"","time_stamp":"","time_start":"","title":"Regularized Multi-Decoder Ensemble for an Error-Aware Scene Representation Network","uid":"v-full-1866","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"A layered network is an important category of graph in which every node is assigned to a layer and layers are drawn as parallel or radial lines. They are commonly used to display temporal data or hierarchical networks. Previous research has demonstrated that minimizing edge crossings is the most important criterion to consider when looking to improve the readability of such networks. While heuristic approaches exist for crossing minimization, we are interested in optimal approaches to the problem that prioritize human readability over computational scalability. We aim to improve the usefulness and applicability of such optimal methods by understanding and improving their scalability to larger graphs.
This paper categorizes and evaluates the state-of-the-art linear programming formulations for exact crossing minimization and describes nine new and existing techniques that could plausibly accelerate the optimization algorithm. Through a computational evaluation, we explore each technique's effect on calculation time and how the techniques assist or inhibit one another, allowing researchers and practitioners to adapt them to the characteristics of their networks. Our best-performing techniques yielded a median improvement of 2.5-17x depending on the solver used, giving us the capability to create optimal layouts faster and for larger networks. We provide an open-source implementation of our methodology in Python, where users can pick which combination of techniques to enable according to their use case. A free copy of this paper and all supplemental materials, datasets used, and source code are available at https://osf.io/.","accessible_pdf":false,"authors":[{"affiliations":["Northeastern University, Boston, United States"],"email":"wilson.conn@northeastern.edu","is_corresponding":true,"name":"Connor Wilson"},{"affiliations":["Northeastern University, Boston, United States"],"email":"eduardopuertac@gmail.com","is_corresponding":false,"name":"Eduardo Puerta"},{"affiliations":["Northeastern University, Boston, United States"],"email":"turokhunter@gmail.com","is_corresponding":false,"name":"Tarik Crnovrsanin"},{"affiliations":["University of Konstanz, Konstanz, Germany","Northeastern University, Boston, United States"],"email":"sara.di-bartolomeo@uni-konstanz.de","is_corresponding":false,"name":"Sara Di Bartolomeo"},{"affiliations":["Northeastern University, Boston, United States"],"email":"c.dunne@northeastern.edu","is_corresponding":false,"name":"Cody Dunne"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Connor Wilson"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1874","time_end":"","time_stamp":"","time_start":"","title":"Evaluating and extending speedup techniques for optimal crossing minimization in layered graph drawings","uid":"v-full-1874","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Merge trees are a valuable tool in scientific visualization of scalar fields; however, current methods for merge tree comparisons are computationally expensive, primarily due to the exhaustive matching between tree nodes. To address this challenge, we introduce the merge tree neural network (MTNN), a learned neural network model designed for merge tree comparison. The MTNN enables rapid and high-quality similarity computation. We first demonstrate how graph neural networks (GNNs), which emerged as an effective encoder for graphs, can be trained to produce embeddings of merge trees in vector spaces that enable efficient similarity comparison. Next, we formulate the novel MTNN model that further improves the similarity comparisons by integrating the tree and node embeddings with a new topological attention mechanism. We demonstrate the effectiveness of our model on real-world data in different domains and examine our model's generalizability across various datasets. Our experimental analysis demonstrates our approach's superiority in accuracy and efficiency.
In particular, we speed up the prior state-of-the-art by more than 100x on the benchmark datasets while maintaining an error rate below 0.1%.","accessible_pdf":false,"authors":[{"affiliations":["Tulane University, New Orleans, United States"],"email":"yqin2@tulane.edu","is_corresponding":true,"name":"Yu Qin"},{"affiliations":["Montana State University, Bozeman, United States"],"email":"brittany.fasy@montana.edu","is_corresponding":false,"name":"Brittany Terese Fasy"},{"affiliations":["Tulane University, New Orleans, United States"],"email":"cwenk@tulane.edu","is_corresponding":false,"name":"Carola Wenk"},{"affiliations":["Tulane University, New Orleans, United States"],"email":"bsumma@tulane.edu","is_corresponding":false,"name":"Brian Summa"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yu Qin"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1880","time_end":"","time_stamp":"","time_start":"","title":"Rapid and Precise Topological Comparison with Merge Tree Neural Networks","uid":"v-full-1880","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"The importance of data charts is self-evident, given their ability to express complex data in a simple format that facilitates quick and easy comparisons, analysis, and consumption. However, the inherent visual nature of the charts creates barriers for people with visual impairments to reap the associated benefits to the same extent as their sighted peers. While extant research has predominantly focused on understanding and addressing these barriers for blind screen reader users, the needs of low-vision screen magnifier users have been largely overlooked. In an interview study, almost all low-vision participants stated that it was challenging to interact with data charts on small screen devices such as smartphones and tablets, even though they could technically \u201csee\u201d the chart content. They ascribed these challenges mainly to the magnification-induced loss of visual context that connected data points with each other and also with chart annotations, e.g., axis values. In this paper, we present a method that addresses this problem by automatically transforming charts that are typically non-interactive images into personalizable interactive charts which allow selective viewing of desired data points and preserve visual context as much as possible under screen enlargement. We evaluated our method in a usability study with 26 low-vision participants, who all performed a set of representative chart-related tasks under different study conditions. 
In the study, we observed that our method significantly improved the usability of charts over both the status quo screen magnifier and a state-of-the-art space compaction-based solution.","accessible_pdf":false,"authors":[{"affiliations":["Old Dominion University, Norfolk, United States"],"email":"yprak001@odu.edu","is_corresponding":true,"name":"Yash Prakash"},{"affiliations":["Old Dominion University, Norfolk, United States"],"email":"pkhan002@odu.edu","is_corresponding":false,"name":"Pathan Aseef Khan"},{"affiliations":["Old Dominion University, Norfolk, United States"],"email":"anaya001@odu.edu","is_corresponding":false,"name":"Akshay Kolgar Nayak"},{"affiliations":["Old Dominion University, Norfolk, United States"],"email":"uksjayarathna@gmail.com","is_corresponding":false,"name":"Sampath Jayarathna"},{"affiliations":["Michigan State University, East Lansing, United States"],"email":"leehaena@msu.edu","is_corresponding":false,"name":"Hae-Na Lee"},{"affiliations":["Old Dominion University, Norfolk, United States"],"email":"vganjigu@odu.edu","is_corresponding":false,"name":"Vikas Ashok"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yash Prakash"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1917","time_end":"","time_stamp":"","time_start":"","title":"Towards Enhancing Low Vision Usability of Data Charts on Smartphones","uid":"v-full-1917","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""}],"time_start":"","title":"Full Papers","track":"","zoom_broadcast_link":"","zoom_private_link":"","zoom_private_meeting":"","zoom_private_password":""}]},"v-ismar":{"event":"ISMAR Invited Partnership Presentations","event_description":"","event_prefix":"v-ismar","event_type":"invited","event_url":"","long_name":"ISMAR Invited Partnership Presentations","organizers":[],"sessions":[]},"v-panels":{"event":"VIS Panels","event_description":"","event_prefix":"v-panels","event_type":"panel","event_url":"","long_name":"VIS Panels","organizers":[],"sessions":[]},"v-short":{"event":"VIS Short Papers","event_description":"","event_prefix":"v-short","event_type":"short","event_url":"","long_name":"VIS Short Papers","organizers":[],"sessions":[{"chair":[],"discord_category":"","discord_channel":"","discord_channel_id":"","discord_link":"","event_prefix":"v-short","ff_link":"","session_id":"short0","session_image":"short0.png","time_end":"","time_slots":[{"abstract":"From dirty data to intentional deception, there are many threats to the validity of data-driven decisions. Making use of data, especially new or unfamiliar data, therefore requires a degree of trust or verification. How is this trust established? In this paper, we present the results of a series of interviews with both producers and consumers of data artifacts (outputs of data ecosystems like spreadsheets, charts, and dashboards) aimed at understanding strategies and obstacles to building trust in data. We find a recurring need, but lack of existing standards, for data validation and verification, especially among data consumers. 
We therefore propose a set of data guards: methods and tools for fostering trust in data artifacts.","accessible_pdf":false,"authors":[{"affiliations":["Tableau Research, Seattle, United States"],"email":"nicole.sultanum@gmail.com","is_corresponding":true,"name":"Nicole Sultanum"},{"affiliations":["Tableau Research, Seattle, United States"],"email":"bromley.denny@gmail.com","is_corresponding":false,"name":"Dennis Bromley"},{"affiliations":["Northeastern University, Portland, United States"],"email":"m.correll@northeastern.edu","is_corresponding":false,"name":"Michael Correll"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Nicole Sultanum"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1040","time_end":"","time_stamp":"","time_start":"","title":"Data Guards: Challenges and Solutions for Fostering Trust in Data","uid":"v-short-1040","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"In the rapidly evolving field of deep learning, the traditional methodologies for designing deep learning models predominantly rely on code-based frameworks. While these approaches provide flexibility, they also create a significant barrier to entry for non-experts and obscure the immediate impact of architectural decisions on model performance. In response to this challenge, recent no-code approaches have been developed with the aim of enabling easy model development through graphical interfaces. However, both traditional and no-code methodologies share a common limitation: the inability to predict model outcomes or identify issues without executing the model. To address this limitation, we introduce an intuitive visual feedback-based no-code approach to visualize and analyze deep learning models during the design phase. This approach utilizes dataflow-based visual programming with dynamic visual encoding of model architecture. A user study was conducted with deep learning developers to demonstrate the effectiveness of our approach in enhancing the model design process, improving model understanding, and facilitating a more intuitive development experience.
The findings of this study suggest that real-time architectural visualization significantly contributes to more efficient model development and a deeper understanding of model behaviors.","accessible_pdf":false,"authors":[{"affiliations":["VIENCE Inc., Seoul, Korea, Republic of","Korea University, Seoul, Korea, Republic of"],"email":"juny0603@gmail.com","is_corresponding":true,"name":"JunYoung Choi"},{"affiliations":["VIENCE Inc., Seoul, Korea, Republic of"],"email":"wings159@vience.co.kr","is_corresponding":false,"name":"Sohee Park"},{"affiliations":["Korea University, Seoul, Korea, Republic of"],"email":"hellenkoh@gmail.com","is_corresponding":false,"name":"GaYeon Koh"},{"affiliations":["VIENCE Inc., Seoul, Korea, Republic of"],"email":"k0seo0330@vience.co.kr","is_corresponding":false,"name":"Youngseo Kim"},{"affiliations":["VIENCE Inc., Seoul, Korea, Republic of","Korea University, Seoul, Korea, Republic of"],"email":"wkjeong@korea.ac.kr","is_corresponding":false,"name":"Won-Ki Jeong"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["JunYoung Choi"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1047","time_end":"","time_stamp":"","time_start":"","title":"Intuitive Design of Deep Learning Models through Visual Feedback","uid":"v-short-1047","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"This comparative study evaluates various neural surface reconstruction methods, particularly focusing on their implications for scientific visualization through reconstructing 3D surfaces via multi-view rendering images. We categorize ten methods into neural radiance fields and neural implicit surfaces, uncovering the benefits of leveraging distance functions (i.e., SDFs and UDFs) to enhance the accuracy and smoothness of the reconstructed surfaces. Our findings highlight the efficiency and quality of NeuS2 for reconstructing closed surfaces and identify NeUDF as a promising candidate for reconstructing open surfaces despite some limitations. We further pinpoint directions for future research, including improving detail capture, optimizing UDF computations, and refining surface extraction methods. 
By sharing our benchmark dataset, we invite researchers to test the performance of their methods, contributing to the advancement of surface reconstruction solutions for scientific visualization.","accessible_pdf":false,"authors":[{"affiliations":["University of Notre Dame, Notre Dame, United States"],"email":"syao2@nd.edu","is_corresponding":true,"name":"Siyuan Yao"},{"affiliations":["Wuhan University, Wuhan, China"],"email":"song.wx@whu.edu.cn","is_corresponding":false,"name":"Weixi Song"},{"affiliations":["University of Notre Dame, Notre Dame, United States"],"email":"chaoli.wang@nd.edu","is_corresponding":false,"name":"Chaoli Wang"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Siyuan Yao"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1049","time_end":"","time_stamp":"","time_start":"","title":"A Comparative Study of Neural Surface Reconstruction for Scientific Visualization","uid":"v-short-1049","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Direct volume rendering using ray-casting is widely used in practice. By using GPUs and applying acceleration techniques such as empty space skipping, high frame rates are possible on modern hardware. This enables performance-critical use cases such as virtual reality volume rendering. The currently fastest known technique uses volumetric distance maps to skip empty sections of the volume during ray-casting but requires the distance map to be updated after each transfer function change. In this paper, we demonstrate a technique for subdividing the volume intensity range into partitions and deriving what we call partitioned distance maps. These can be used to accelerate the distance map computation for a newly changed transfer function by a factor of up to 30.
This allows the currently fastest known empty space skipping approach to be used while maintaining high frame rates, even when the transfer function is changed frequently.","accessible_pdf":false,"authors":[{"affiliations":["University of Applied Sciences Wiener Neustadt, Wiener Neustadt, Austria"],"email":"michael.rauter@fhwn.ac.at","is_corresponding":true,"name":"Michael Rauter"},{"affiliations":["Medical University of Vienna, Vienna, Austria"],"email":"lukas.a.zimmermann@meduniwien.ac.at","is_corresponding":false,"name":"Lukas Zimmermann PhD"},{"affiliations":["University of Applied Sciences Wiener Neustadt, Wiener Neustadt, Austria"],"email":"markus.zeilinger@fhwn.ac.at","is_corresponding":false,"name":"Markus Zeilinger PhD"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Michael Rauter"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1054","time_end":"","time_stamp":"","time_start":"","title":"Accelerating Transfer Function Update for Distance Map based Volume Rendering","uid":"v-short-1054","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"We present FCNR, a fast compressive neural representation for tens of thousands of visualization images under varying viewpoints and timesteps. The existing NeRVI solution, albeit enjoying a high compression rate, incurs slow speeds in encoding and decoding. Built on recent advances in stereo image compression, FCNR assimilates stereo context modules and joint context transfer modules to compress image pairs. Our solution significantly improves encoding and decoding speed while maintaining high reconstruction quality and a satisfactory compression rate. To demonstrate its effectiveness, we compare FCNR with state-of-the-art neural compression methods, including E-NeRV, HNeRV, NeRVI, and ECSIC.","accessible_pdf":false,"authors":[{"affiliations":["University of Notre Dame, Notre Dame, United States"],"email":"ylu25@nd.edu","is_corresponding":true,"name":"Yunfei Lu"},{"affiliations":["University of Notre Dame, Notre Dame, United States"],"email":"pgu@nd.edu","is_corresponding":false,"name":"Pengfei Gu"},{"affiliations":["University of Notre Dame, Notre Dame, United States"],"email":"chaoli.wang@nd.edu","is_corresponding":false,"name":"Chaoli Wang"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yunfei Lu"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1056","time_end":"","time_stamp":"","time_start":"","title":"FCNR: Fast Compressive Neural Representation of Visualization Images","uid":"v-short-1056","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Real-world datasets often consist of quantitative and categorical variables, and the analyst needs to focus on either kind separately or on both jointly. We previously proposed a visualization technique that tackles these challenges by supporting visual cluster and set analysis.
In this paper, we investigate how its visualization parameters affect the accuracy and speed of cluster and set analysis tasks in a controlled experiment. Our findings show that, with the proper settings, our visualization can support both task types well. However, we did not find settings suitable for the joint task, which provides opportunities for future research.","accessible_pdf":false,"authors":[{"affiliations":["TU Wien, Vienna, Austria"],"email":"nikolaus.piccolotto@tuwien.ac.at","is_corresponding":true,"name":"Nikolaus Piccolotto"},{"affiliations":["TU Wien, Vienna, Austria"],"email":"mwallinger@ac.tuwien.ac.at","is_corresponding":false,"name":"Markus Wallinger"},{"affiliations":["Institute of Visual Computing and Human-Centered Technology, Vienna, Austria"],"email":"miksch@ifs.tuwien.ac.at","is_corresponding":false,"name":"Silvia Miksch"},{"affiliations":["TU Wien, Vienna, Austria"],"email":"markus.boegl@tuwien.ac.at","is_corresponding":false,"name":"Markus B\u00f6gl"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Nikolaus Piccolotto"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1057","time_end":"","time_stamp":"","time_start":"","title":"On Combined Visual Cluster and Set Analysis","uid":"v-short-1057","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Semantic interaction (SI) in Dimension Reduction (DR) of images allows users to incorporate feedback through direct manipulation of the 2D positions of images. Through interaction, users specify a set of pairwise relationships that the DR should aim to capture. Existing methods for images incorporate feedback into the DR through feature weights on abstract embedding features. However, if the original embedding features do not suitably capture the user's task, then the DR cannot either. We propose ImageSI, an SI method for image DR that incorporates user feedback directly into the image model to update the underlying embeddings, rather than weighting them. In doing so, ImageSI ensures that the embeddings suitably capture the features necessary for the task so that the DR can subsequently organize images using those features. We present two variations of ImageSI using different loss functions: ImageSI_MDS-Inverse, which prioritizes the explicit pairwise relationships from the interaction, and ImageSI_Triplet, which prioritizes clustering, using the interaction to define groups of images.
Finally, we present a usage scenario and a simulation-based evaluation to demonstrate the utility of ImageSI and compare it to current methods.","accessible_pdf":false,"authors":[{"affiliations":["Virginia Tech, Blacksburg, United States"],"email":"jiayuelin@vt.edu","is_corresponding":false,"name":"Jiayue Lin"},{"affiliations":["Tulane University, New Orleans, United States"],"email":"rfaust1@tulane.edu","is_corresponding":true,"name":"Rebecca Faust"},{"affiliations":["Virginia Tech, Blacksburg, United States"],"email":"north@vt.edu","is_corresponding":false,"name":"Chris North"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Rebecca Faust"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1058","time_end":"","time_stamp":"","time_start":"","title":"ImageSI: Semantic Interaction for Deep Learning Image Projections","uid":"v-short-1058","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Gantt charts are a widely-used idiom for visualizing temporal discrete event sequence data where dependencies exist between events. They are popular in domains such as manufacturing and computing for their intuitive layout of such data. However, these domains frequently generate data at scales that tax both the visual representation and the ability to render it at interactive speeds. To aid visualization developers who use Gantt charts in these situations, we develop a task taxonomy of low-level visualization tasks supported by Gantt charts and connect them to the data queries needed to support them. Our taxonomy is derived through a systematic literature survey of visualizations using Gantt charts over the past 30 years.","accessible_pdf":false,"authors":[{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"sayefsakin@sci.utah.edu","is_corresponding":true,"name":"Sayef Azad Sakin"},{"affiliations":["The University of Utah, Salt Lake City, United States"],"email":"kisaacs@sci.utah.edu","is_corresponding":false,"name":"Katherine E. Isaacs"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Sayef Azad Sakin"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1059","time_end":"","time_stamp":"","time_start":"","title":"A Literature-based Visualization Task Taxonomy for Gantt charts","uid":"v-short-1059","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Annotations are a critical component of visualizations, helping viewers interpret the visual representation and highlighting critical data insights. Despite their significant role, we lack an understanding of how annotations can be incorporated into other data representations, such as physicalizations and sonifications. Given the emergent nature of these representations, sonifications and physicalizations lack formalized conventions (e.g., design space, vocabulary), which can introduce challenges for audiences in interpreting the intended data encoding.
To address this challenge, this work focuses on how annotations can be more tightly integrated into the design process of creating sonifications and physicalizations. In an exploratory study with 13 designers, we explore how visualization annotation techniques can be adapted to sonic and physical modalities. Our work highlights how annotations for sonifications and physicalizations are inseparable from their data encodings.","accessible_pdf":false,"authors":[{"affiliations":["Whitman College, Walla Walla, United States"],"email":"sorensor@whitman.edu","is_corresponding":false,"name":"Rhys Sorenson-Graff"},{"affiliations":["University of Colorado Boulder, Boulder, United States"],"email":"sandra.bae@colorado.edu","is_corresponding":true,"name":"S. Sandra Bae"},{"affiliations":["Whitman College, Walla Walla, United States"],"email":"wirfsbro@colorado.edu","is_corresponding":false,"name":"Jordan Wirfs-Brock"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["S. Sandra Bae"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1062","time_end":"","time_stamp":"","time_start":"","title":"Integrating Annotations into the Design Process for Sonifications and Physicalizations","uid":"v-short-1062","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Large Language Models (LLMs) have demonstrated remarkable versatility in visualization authoring, but often generate suboptimal designs that are invalid or fail to adhere to design guidelines for effective visualization. We present Bavisitter, a natural language interface that integrates established visualization design guidelines into LLMs. Based on our survey of design issues in LLM-generated visualizations, Bavisitter monitors the generated visualizations during a visualization authoring dialogue to detect issues. When an issue is detected, it intervenes in the dialogue, suggesting possible solutions to the issue by modifying the prompts.
We also demonstrate two use cases where Bavisitter detects and resolves design issues in actual LLM-generated visualizations.","accessible_pdf":false,"authors":[{"affiliations":["Sungkyunkwan University, Suwon, Korea, Republic of"],"email":"jiwnchoi@skku.edu","is_corresponding":true,"name":"Jiwon Choi"},{"affiliations":["Sungkyunkwan University, Suwon, Korea, Republic of"],"email":"dlwodnd00@skku.edu","is_corresponding":false,"name":"Jaeung Lee"},{"affiliations":["Sungkyunkwan University, Suwon, Korea, Republic of"],"email":"jmjo@skku.edu","is_corresponding":false,"name":"Jaemin Jo"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Jiwon Choi"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1064","time_end":"","time_stamp":"","time_start":"","title":"Bavisitter: Integrating Design Guidelines into Large Language Models for Visualization Authoring","uid":"v-short-1064","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Although many dimensionality reduction (DR) techniques employ stochastic methods for computational efficiency, such as negative sampling or stochastic gradient descent, their impact on the projection has been underexplored. In this work, we investigate how such stochasticity affects the stability of projections and present a novel DR technique, GhostUMAP, to measure the pointwise instability of projections. Our idea is to introduce clones of data points, \"ghosts\", into UMAP's layout optimization process. Ghosts are designed to be completely passive: they do not affect other points but are influenced by attractive and repulsive forces from the original data points. After a single optimization run, GhostUMAP can capture the projection instability of data points by measuring the variance among the projected positions of their ghosts. We also present a successive halving technique to reduce the computational cost of GhostUMAP.
Our results suggest that GhostUMAP can reveal unstable data points with a reasonable computational overhead.","accessible_pdf":false,"authors":[{"affiliations":["Sungkyunkwan University, Suwon, Korea, Republic of"],"email":"mw.jung@skku.edu","is_corresponding":true,"name":"Myeongwon Jung"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"takanori.fujiwara@liu.se","is_corresponding":false,"name":"Takanori Fujiwara"},{"affiliations":["Sungkyunkwan University, Suwon, Korea, Republic of"],"email":"jmjo@skku.edu","is_corresponding":false,"name":"Jaemin Jo"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Myeongwon Jung"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1065","time_end":"","time_stamp":"","time_start":"","title":"GhostUMAP: Measuring Pointwise Instability in Dimensionality Reduction","uid":"v-short-1065","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Integrating textual content, such as titles, annotations, and captions, with visualizations facilitates comprehension and takeaways during data exploration. Yet current tools often lack mechanisms for integrating meaningful text with visual data. This paper introduces DASH, a bimodal data exploration tool that supports integrating semantic levels into the interactive process of visualization and text-based analysis. DASH operationalizes a modified version of Lundgard et al.'s semantic hierarchy model that categorizes data descriptions into four levels ranging from basic encodings to high-level insights. By leveraging this structured semantic level framework and a large language model's text generation capabilities, DASH enables the creation of data-driven narratives via drag-and-drop user interaction. Through a preliminary user evaluation, we discuss the utility of DASH's text and chart integration capabilities when participants perform data exploration with the tool. Based on the study's feedback and observations, we discuss implications for designing unified text and chart authoring tools.","accessible_pdf":false,"authors":[{"affiliations":["Tableau Research, Seattle, United States"],"email":"bromley.denny@gmail.com","is_corresponding":true,"name":"Dennis Bromley"},{"affiliations":["Tableau Research, Palo Alto, United States"],"email":"vsetlur@tableau.com","is_corresponding":false,"name":"Vidya Setlur"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Dennis Bromley"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1068","time_end":"","time_stamp":"","time_start":"","title":"DASH: A Bimodal Data Exploration Tool for Interactive Text and Visualizations","uid":"v-short-1068","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Recent advancements in vision models have significantly enhanced their ability to perform complex chart understanding tasks, such as chart captioning and chart question answering. However, assessing how these models process charts remains challenging. 
Existing benchmarks only coarsely evaluate how well the model performs the given task without thoroughly evaluating the underlying mechanisms that drive performance, such as how models extract image embeddings. This gap limits our understanding of the model's perceptual capabilities regarding fundamental graphical components. Therefore, we introduce a novel evaluation framework designed to assess the graphical perception of image embedding models. In the context of chart comprehension, we examine two main aspects of channel effectiveness: accuracy and discriminability of various visual channels. We first assess channel accuracy through the linearity of embeddings, which is the degree to which the perceived magnitude is proportional to the size of the stimulus. Conversely, distances between embeddings serve as a measure of discriminability; embeddings that are far apart can be considered discriminable. Our experiments on a general image embedding model, CLIP, showed that it perceives channel accuracy differently from humans and demonstrated distinct discriminability in specific channels such as length, tilt, and curvature. We aim to extend our work as a more general benchmark for reliable visual encoders and to enhance a model toward two distinct goals for future applications: precise chart comprehension and mimicking human perception.","accessible_pdf":false,"authors":[{"affiliations":["Seoul National University, Seoul, Korea, Republic of"],"email":"dtngus0111@gmail.com","is_corresponding":true,"name":"Soohyun Lee"},{"affiliations":["Seoul National University, Seoul, Korea, Republic of"],"email":"jangsus1@snu.ac.kr","is_corresponding":false,"name":"Minsuk Chang"},{"affiliations":["Seoul National University, Seoul, Korea, Republic of"],"email":"shpark@hcil.snu.ac.kr","is_corresponding":false,"name":"Seokhyeon Park"},{"affiliations":["Seoul National University, Seoul, Korea, Republic of"],"email":"jseo@snu.ac.kr","is_corresponding":false,"name":"Jinwook Seo"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Soohyun Lee"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1072","time_end":"","time_stamp":"","time_start":"","title":"Assessing Graphical Perception of Image Embedding Models using Channel Effectiveness","uid":"v-short-1072","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Data visualizations are reaching global audiences. As people who use Right-to-left (RTL) scripts constitute over a billion potential data visualization users, a need emerges to investigate how visualizations are communicated to them. Web design guidelines exist to assist designers in adapting different reading directions, yet we lack a similar standard for visualization design. This paper investigates the design patterns of visualizations with RTL scripts. We collected 128 visualizations from data-driven articles published in Arabic news outlets and analyzed their chart composition, textual elements, and sources. Our analysis suggests that designers tend to apply RTL approaches more frequently for categorical data.
In other situations, we observed a mix of Left-to-right (LTR) and RTL approaches for chart directions and structures, sometimes inconsistently utilized within the same article. We reflect on this lack of clear guidelines for RTL data visualizations and derive implications for visualization authoring tools and future research directions.","accessible_pdf":false,"authors":[{"affiliations":["University College London, London, United Kingdom","UAE University , Al Ain, United Arab Emirates"],"email":"muna.alebri.19@ucl.ac.uk","is_corresponding":true,"name":"Muna Alebri"},{"affiliations":["Worcester Polytechnic Institute, Worcester, United States"],"email":"ntrakotondravony@wpi.edu","is_corresponding":false,"name":"No\u00eblle Rakotondravony"},{"affiliations":["Worcester Polytechnic Institute, Worcester, United States"],"email":"ltharrison@wpi.edu","is_corresponding":false,"name":"Lane Harrison"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Muna Alebri"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1078","time_end":"","time_stamp":"","time_start":"","title":"Design Patterns in Right-to-Left Visualizations: The Case of Arabic Content","uid":"v-short-1078","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Image datasets serve as the foundation for machine learning models in computer vision, significantly influencing model capabilities, performance, and biases alongside architectural considerations. Therefore, understanding the composition and distribution of these datasets has become increasingly crucial. To address the need for intuitive exploration of these datasets, we propose AEye, an extensible and scalable visualization tool tailored to image datasets. AEye utilizes a contrastively trained model to embed images into semantically meaningful high-dimensional representations, facilitating data clustering and organization. To visualize the high-dimensional representations, we project them onto a two-dimensional plane and arrange images in layers so users can seamlessly navigate and explore them interactively. Furthermore, AEye facilitates semantic search functionalities for both text and image queries, enabling users to search for content. 
We open-source the codebase for AEye and provide a simple configuration for adding additional datasets.","accessible_pdf":false,"authors":[{"affiliations":["ETH Zurich, Zurich, Switzerland"],"email":"fgroetschla@ethz.ch","is_corresponding":false,"name":"Florian Gr\u00f6tschla"},{"affiliations":["ETH Zurich, Zurich, Switzerland"],"email":"lanzendoerfer@ethz.ch","is_corresponding":false,"name":"Luca A Lanzend\u00f6rfer"},{"affiliations":["ETH Zurich, Zurich, Switzerland"],"email":"mcalzavara@student.ethz.ch","is_corresponding":false,"name":"Marco Calzavara"},{"affiliations":["ETH Zurich, Zurich, Switzerland"],"email":"wattenhofer@ethz.ch","is_corresponding":false,"name":"Roger Wattenhofer"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Florian Gr\u00f6tschla"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1079","time_end":"","time_stamp":"","time_start":"","title":"AEye: A Visualization Tool for Image Datasets","uid":"v-short-1079","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"The sine illusion occurs when more quickly changing pairs of lines lead to larger underestimates of the delta between them. In a user study, we evaluate three visual manipulations for mitigating the sine illusion: dotted lines, aligned gridlines, and offset gridlines. We asked participants to compare the deltas between two lines at two time points and found aligned gridlines to be the most effective in mitigating the sine illusion. Using data from the user study, we produced a model that predicts the impact of the sine illusion in line charts by accounting for the ratio of the vertical distance between the two points of comparison. When the ratio is less than 50%, participants begin to be influenced by the sine illusion. This effect can be significantly exacerbated when the difference between the two deltas falls under 30%. We compared two explanations for the sine illusion based on our data: either participants were mistakenly using the perpendicular distance between the two lines to make their comparison (the perpendicular explanation), or they incorrectly relied on the length of the line segment perpendicular to the angle bisector of the bottom and top lines (the equal triangle explanation).
We found the equal triangle explanation to be the more predictive model of participant behavior.","accessible_pdf":false,"authors":[{"affiliations":["Google LLC, San Francisco, United States"],"email":"cknit1999@gmail.com","is_corresponding":false,"name":"Clayton J Knittel"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"jawuah3@gatech.edu","is_corresponding":false,"name":"Jane Awuah"},{"affiliations":["Northwestern University, Evanston, United States"],"email":"franconeri@northwestern.edu","is_corresponding":false,"name":"Steven L Franconeri"},{"affiliations":["Georgia Tech, Atlanta, United States"],"email":"cxiong@gatech.edu","is_corresponding":true,"name":"Cindy Xiong Bearfield"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Cindy Xiong Bearfield"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1081","time_end":"","time_stamp":"","time_start":"","title":"Gridlines Mitigate Sine Illusion in Line Charts","uid":"v-short-1081","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"In healthcare, AI techniques are widely used for tasks like risk assessment and anomaly detection. Despite AI's potential as a valuable assistant, its role in complex medical data analysis is often framed in ways that oversimplify human-AI collaboration dynamics. To address this, we collaborated with a local hospital, engaging six physicians and one data scientist in a formative study. From this collaboration, we propose a framework integrating two-phase interactive visualization systems: one for Human-Led, AI-Assisted Retrospective Analysis and another for AI-Mediated, Human-Reviewed Iterative Modeling.
This framework aims to enhance understanding and discussion around effective human-AI collaboration in healthcare.","accessible_pdf":false,"authors":[{"affiliations":["ShanghaiTech University, Shanghai, China","ShanghaiTech University, Shanghai, China"],"email":"ouyy@shanghaitech.edu.cn","is_corresponding":true,"name":"Yang Ouyang"},{"affiliations":["University of Illinois at Urbana-Champaign, Champaign, United States","University of Illinois at Urbana-Champaign, Champaign, United States"],"email":"zhang414@illinois.edu","is_corresponding":false,"name":"Chenyang Zhang"},{"affiliations":["ShanghaiTech University, Shanghai, China","ShanghaiTech University, Shanghai, China"],"email":"wanghe1@shanghaitech.edu.cn","is_corresponding":false,"name":"He Wang"},{"affiliations":["Zhongshan Hospital Fudan University, Shanghai, China","Zhongshan Hospital Fudan University, Shanghai, China"],"email":"15301050137@fudan.edu.cn","is_corresponding":false,"name":"Tianle Ma"},{"affiliations":["Zhongshan Hospital Fudan University, Shanghai, China","Zhongshan Hospital Fudan University, Shanghai, China"],"email":"cjiang_fdu@yeah.net","is_corresponding":false,"name":"Chang Jiang"},{"affiliations":["Zhongshan Hospital Fudan University, Shanghai, China","Zhongshan Hospital Fudan University, Shanghai, China"],"email":"522649732@qq.com","is_corresponding":false,"name":"Yuheng Yan"},{"affiliations":["Zhongshan Hospital Fudan University, Shanghai, China","Zhongshan Hospital Fudan University, Shanghai, China"],"email":"yan.zuoqin@zs-hospital.sh.cn","is_corresponding":false,"name":"Zuoqin Yan"},{"affiliations":["Hong Kong University of Science and Technology, Hong Kong, Hong Kong","Hong Kong University of Science and Technology, Hong Kong, Hong Kong"],"email":"mxj@cse.ust.hk","is_corresponding":false,"name":"Xiaojuan Ma"},{"affiliations":["Southeast University, Nanjing, China","Southeast University, Nanjing, China"],"email":"cshiag@connect.ust.hk","is_corresponding":false,"name":"Chuhan Shi"},{"affiliations":["ShanghaiTech University, Shanghai, China","ShanghaiTech University, Shanghai, China"],"email":"liquan@shanghaitech.edu.cn","is_corresponding":false,"name":"Quan Li"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yang Ouyang"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1089","time_end":"","time_stamp":"","time_start":"","title":"A Two-Phase Visualization System for Continuous Human-AI Collaboration in Sequelae Analysis and Modeling","uid":"v-short-1089","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Visualizing high dimensional data is challenging, since any dimensionality reduction technique will distort distances. A classic method in cartography\u2013Tissot\u2019s Indicatrix, specific to sphere-to-plane maps\u2013visualizes distortion using ellipses. Inspired by this idea, we describe the hypertrix: a method for representing distortions that occur when data is projected from arbitrarily high dimensions onto a 2D plane. 
We demonstrate our technique through synthetic and real-world datasets, and describe how this indicatrix can guide interpretations of nonlinear dimensionality reduction.","accessible_pdf":false,"authors":[{"affiliations":["Harvard University, Boston, United States"],"email":"sraval@g.harvard.edu","is_corresponding":true,"name":"Shivam Raval"},{"affiliations":["Harvard University, Cambridge, United States","Google Research, Cambridge, United States"],"email":"viegas@google.com","is_corresponding":false,"name":"Fernanda Viegas"},{"affiliations":["Harvard University, Cambridge, United States","Google Research, Cambridge, United States"],"email":"wattenberg@gmail.com","is_corresponding":false,"name":"Martin Wattenberg"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Shivam Raval"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1090","time_end":"","time_stamp":"","time_start":"","title":"Hypertrix: An indicatrix for high-dimensional visualizations","uid":"v-short-1090","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Coordinated multiple views (CMV) in a visual analytics system can help users explore multiple data representations simultaneously with linked interactions. However, the implementation of coordinated multiple views can be challenging. Without standard software libraries, visualization designers need to re-implement CMV during the development of each system. We introduce use-coordination, a grammar and software library that supports the efficient implementation of CMV. The grammar defines a JSON-based representation for an abstract coordination model from the information visualization literature. We contribute an optional extension to the model and grammar that allows for hierarchical coordination. Through three use cases, we show that use-coordination enables implementation of CMV in systems containing not only basic statistical charts but also more complex visualizations such as medical imaging volumes.
We describe six software extensions, including a graphical editor for manipulation of coordination, which showcase the potential to build upon our coordination-focused declarative approach.","accessible_pdf":false,"authors":[{"affiliations":["Harvard Medical School, Boston, United States"],"email":"mark_keller@hms.harvard.edu","is_corresponding":true,"name":"Mark S Keller"},{"affiliations":["Harvard Medical School, Boston, United States"],"email":"trevor_manz@g.harvard.edu","is_corresponding":false,"name":"Trevor Manz"},{"affiliations":["Harvard Medical School, Boston, United States"],"email":"nils@hms.harvard.edu","is_corresponding":false,"name":"Nils Gehlenborg"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Mark S Keller"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1096","time_end":"","time_stamp":"","time_start":"","title":"Use-Coordination: Model, Grammar, and Library for Implementation of Coordinated Multiple Views","uid":"v-short-1096","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Visualization tools now commonly present automated insights highlighting salient data patterns, including correlations, distributions, outliers, and differences, among others. While these insights are valuable for data exploration and chart interpretation, users currently only have a binary choice of accepting or rejecting them, lacking the flexibility to refine the system logic or customize the insight generation process. To address this limitation, we present GROOT, a prototype system that allows users to proactively specify and refine automated data insights. The system allows users to directly manipulate chart elements to receive insight recommendations based on their selections. Additionally, GROOT provides users with a manual editing interface to customize, reconfigure, or add new insights to individual charts and propagate them to future explorations. 
We describe a usage scenario to illustrate how these features collectively support insight editing and configuration, and discuss opportunities for future work including incorporating LLMs, improving semantic data and visualization search, and supporting insight management.","accessible_pdf":false,"authors":[{"affiliations":["University of Maryland, College Park, College Park, United States","Tableau Research, Seattle, United States"],"email":"sgathani@cs.umd.edu","is_corresponding":true,"name":"Sneha Gathani"},{"affiliations":["Tableau Research, Seattle, United States"],"email":"amcrisan@uwaterloo.ca","is_corresponding":false,"name":"Anamaria Crisan"},{"affiliations":["Tableau Research, Palo Alto, United States"],"email":"vsetlur@tableau.com","is_corresponding":false,"name":"Vidya Setlur"},{"affiliations":["Tableau Research, Seattle, United States"],"email":"arjun.srinivasan.10@gmail.com","is_corresponding":false,"name":"Arjun Srinivasan"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Sneha Gathani"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1097","time_end":"","time_stamp":"","time_start":"","title":"Groot: An Interface for Editing and Configuring Automated Data Insights","uid":"v-short-1097","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Confidence scores of automatic speech recognition (ASR) outputs are often inadequately communicated, preventing its seamless integration into analytical workflows. In this paper, we introduce ConFides, a visual analytic system developed in collaboration with intelligence analysts to address this issue. ConFides aims to aid exploration and post-AI-transcription editing by visually representing the confidence associated with the transcription. We demonstrate how our tool can assist intelligence analysts who use ASR outputs in their analytical and exploratory tasks and how it can help mitigate misinterpretation of crucial information. We also discuss opportunities for improving textual data cleaning and model transparency for human-machine collaboration.","accessible_pdf":false,"authors":[{"affiliations":["Washington University in St. Louis, St. Louis, United States"],"email":"sha@wustl.edu","is_corresponding":true,"name":"Sunwoo Ha"},{"affiliations":["Washington University in St. Louis, St. Louis, United States"],"email":"chaelim@wustl.edu","is_corresponding":false,"name":"Chaehun Lim"},{"affiliations":["Smith College, Northampton, United States"],"email":"jcrouser@smith.edu","is_corresponding":false,"name":"R. Jordan Crouser"},{"affiliations":["Washington University in St. Louis, St. 
Louis, United States"],"email":"alvitta@wustl.edu","is_corresponding":false,"name":"Alvitta Ottley"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Sunwoo Ha"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1100","time_end":"","time_stamp":"","time_start":"","title":"ConFides: A Visual Analytics Solution for Automated Speech Recognition Analysis and Exploration","uid":"v-short-1100","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Color coding, a technique assigning specific colors to different information types, has proven advantages in aiding human cognitive activities, especially reading and comprehension. The rise of Large Language Models (LLMs) has streamlined document coding, enabling simple automatic text labeling with various schemes. This has the potential to make color-coding more accessible and benefit more users. However, the importance of color choice, particularly in aiding textual information seeking through various color schemes, is not well studied. This paper presents a user study assessing the effectiveness of various color schemes generated by different base colors for readers' information-seeking performance in text documents color-coded by LLMs. Participants performed information-seeking tasks within scholarly papers' abstracts, each coded with a different scheme under time constraints. Results showed that non-analogous color schemes lead to better information-seeking performance, in both accuracy and response time. Yellow-inclusive color schemes lead to shorter response times and are also preferred by most participants. These findings could inform better choices of color schemes for annotating text documents. As LLMs advance document coding, we advocate for more research focusing on the \"color\" aspect of color-coding techniques.","accessible_pdf":false,"authors":[{"affiliations":["Pennsylvania State University, University Park, United States"],"email":"samnghoyin@gmail.com","is_corresponding":true,"name":"Ho Yin Ng"},{"affiliations":["Pennsylvania State University, University Park, United States"],"email":"zmh5268@psu.edu","is_corresponding":false,"name":"Zeyu He"},{"affiliations":["Pennsylvania State University, University Park, United States"],"email":"txh710@psu.edu","is_corresponding":false,"name":"Ting-Hao Kenneth Huang"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Ho Yin Ng"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1101","time_end":"","time_stamp":"","time_start":"","title":"What Color Scheme is More Effective in Assisting Readers to Locate Information in a Color-Coded Article?","uid":"v-short-1101","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Homophily refers to the tendency of individuals to associate with others who are similar to them in characteristics such as race, ethnicity, age, gender, or interests. 
In this paper, we investigate if individuals exhibit racial homophily when viewing visualizations, using mass shooting data in the United States as the example topic. We conducted a crowdsourced experiment (N=450) where each participant was shown a visualization displaying the counts of mass shooting victims, highlighting the counts for one of three racial groups (White, Black, or Hispanic). Participants were assigned to view visualizations highlighting their own race or a different race to assess the influence of racial concordance on changes in affect (emotion) and attitude towards gun control. While we did not find evidence of homophily, the results showed a significant negative shift in affect across all visualization conditions. Notably, political ideology significantly impacted changes in affect, with more liberal views correlating with a more negative affect change. Our findings underscore the complexity of reactions to mass shooting visualizations and highlight the need for additional measures for understanding homophily in visualizations.","accessible_pdf":false,"authors":[{"affiliations":["New York University, Brooklyn, United States"],"email":"pt2393@nyu.edu","is_corresponding":true,"name":"Poorna Talkad Sukumar"},{"affiliations":["New York University, Brooklyn, United States"],"email":"mporfiri@nyu.edu","is_corresponding":false,"name":"Maurizio Porfiri"},{"affiliations":["New York University, New York, United States"],"email":"onov@nyu.edu","is_corresponding":false,"name":"Oded Nov"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Poorna Talkad Sukumar"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1109","time_end":"","time_stamp":"","time_start":"","title":"Connections Beyond Data: Exploring Homophily With Visualizations","uid":"v-short-1109","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"As visualization literacy and its implications gain prominence, we need effective methods to teach and prepare students for the variety of visualizations they might encounter in an increasingly data-driven world. Recently, the potential of comics has been recognized in various data visualization contexts, including educational settings. In this paper, we describe the development of a workshop in which we use our \u201ccomic construction kit\u201d as a tool for students to understand various data visualization techniques through an interactive creative approach of creating explanatory comics. We report on our insights and learnings from holding eight workshops with high school students, high school teachers, university students, and university lecturers, aiming to enhance the landscape of hands-on visualization activities that can enrich the visualization classroom. The comic construction kit and all supplemental materials are open source under a CC-BY license and available at https://fhstp.github.io/comixplain/vis4schools.html.","accessible_pdf":false,"authors":[{"affiliations":["St. P\u00f6lten University of Applied Sciences, St. P\u00f6lten, Austria"],"email":"magdalena.boucher@fhstp.ac.at","is_corresponding":true,"name":"Magdalena Boucher"},{"affiliations":["St. Poelten University of Applied Sciences, St. 
Poelten, Austria"],"email":"christina.stoiber@fhstp.ac.at","is_corresponding":false,"name":"Christina Stoiber"},{"affiliations":["School of Informatics, Communications and Media, Hagenberg im M\u00fchlkreis, Austria"],"email":"mandy.keck@fh-hagenberg.at","is_corresponding":false,"name":"Mandy Keck"},{"affiliations":["St. Poelten University of Applied Sciences, St. Poelten, Austria"],"email":"victor.oliveira@fhstp.ac.at","is_corresponding":false,"name":"Victor Adriel de Jesus Oliveira"},{"affiliations":["St. Poelten University of Applied Sciences, St. Poelten, Austria"],"email":"wolfgang.aigner@fhstp.ac.at","is_corresponding":false,"name":"Wolfgang Aigner"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Magdalena Boucher"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1114","time_end":"","time_stamp":"","time_start":"","title":"The Comic Construction Kit: An Activity for Students to Learn and Explain Data Visualizations","uid":"v-short-1114","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Visualizations support rapid analysis of scientific datasets, allowing viewers to glean aggregate information (e.g., the mean) within split-seconds. While prior research has explored this ability in conventional charts, it is unclear if spatial visualizations used by computational scientists afford a similar ensemble perception capacity. We investigate people's ability to estimate two summary statistics, mean and variance, from pseudocolor scalar fields. In a crowdsourced experiment, we find that participants can reliably characterize both statistics, although variance discrimination requires a much stronger signal. Multi-hue and diverging colormaps outperformed monochromatic, luminance ramps in aiding this extraction. Analysis of qualitative responses suggests that participants often estimate the distribution of hotspots and valleys as visual proxies for data statistics. These findings suggest that people's summary interpretation of spatial datasets is likely driven by the appearance of discrete color segments, rather than assessments of overall luminance. Implicit color segmentation in quantitative displays could thus prove more useful than previously assumed by facilitating quick, gist-level judgments about color-coded visualizations.","accessible_pdf":false,"authors":[{"affiliations":["Argonne National Laboratory, Lemont, United States"],"email":"vmateevitsi@anl.gov","is_corresponding":false,"name":"Victor A. Mateevitsi"},{"affiliations":["Argonne National Laboratory, Lemont, United States","University of Illinois Chicago, Chicago, United States"],"email":"papka@anl.gov","is_corresponding":false,"name":"Michael E. 
Papka"},{"affiliations":["Indiana University, Indianapolis, United States"],"email":"redak@iu.edu","is_corresponding":true,"name":"Khairi Reda"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Khairi Reda"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1116","time_end":"","time_stamp":"","time_start":"","title":"Science in a Blink: Supporting Ensemble Perception in Scalar Fields","uid":"v-short-1116","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Geovisualizations are powerful tools for exploratory spatial analysis, enabling sighted users to discern patterns, trends, and relationships within geographic data. However, these visual tools have remained largely inaccessible to screen-reader users. We present AltGeoViz, a new system we designed to facilitate geovisualization exploration for these users. AltGeoViz dynamically generates alt-text descriptions based on the user's current map view, providing summaries of spatial patterns and descriptive statistics. In a study of five screen-reader users, we found that AltGeoViz enabled them to interact with geovisualizations in previously infeasible ways. Participants demonstrated a clear understanding of data summaries and their location context, and they could synthesize spatial understandings of their explorations. Moreover, we identified key areas for improvement, such as the addition of intuitive spatial navigation controls and comparative analysis features.","accessible_pdf":false,"authors":[{"affiliations":["University of Washington, Seattle, United States"],"email":"chuchuli@cs.washington.edu","is_corresponding":true,"name":"Chu Li"},{"affiliations":["University of Washington, Seattle, United States"],"email":"ypang2@cs.washington.edu","is_corresponding":false,"name":"Rock Yuren Pang"},{"affiliations":["University of Washington, Seattle, United States"],"email":"asharif@cs.washington.edu","is_corresponding":false,"name":"Ather Sharif"},{"affiliations":["University of Washington, Seattle, United States"],"email":"chheda@cs.washington.edu","is_corresponding":false,"name":"Arnavi Chheda-Kothary"},{"affiliations":["University of Washington, Seattle, United States"],"email":"jheer@uw.edu","is_corresponding":false,"name":"Jeffrey Heer"},{"affiliations":["University of Washington, Seattle, United States"],"email":"jonf@cs.uw.edu","is_corresponding":false,"name":"Jon E. Froehlich"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Chu Li"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1117","time_end":"","time_stamp":"","time_start":"","title":"AltGeoViz: Facilitating Accessible Geovisualization","uid":"v-short-1117","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Analyzing uncertainty in spatial data is a vital task in many domains, as for example with climate and weather simulation ensembles. 
Although there are many methods to support the analysis of the uncertainty, such as uncertain isocontours or calculation of statistical values, it is still a challenge to get an overview of the uncertainty and then decide on a further method or parameter to analyze the data, or to investigate some region or point of interest further. We present cumulative height fields, a visualization method for 2D scalar field ensembles using the marginal empirical distribution function, and show preliminary results using volume rendering and slicing for the Max Planck Institute Grand Ensemble.","accessible_pdf":false,"authors":[{"affiliations":["Institute of Computer Science, Leipzig University, Leipzig, Germany"],"email":"daetz@informatik.uni-leipzig.de","is_corresponding":true,"name":"Tomas Rodolfo Daetz Chacon"},{"affiliations":["German Climate Computing Center (DKRZ), Hamburg, Germany"],"email":"boettinger@dkrz.de","is_corresponding":false,"name":"Michael B\u00f6ttinger"},{"affiliations":["Leipzig University, Leipzig, Germany"],"email":"scheuermann@informatik.uni-leipzig.de","is_corresponding":false,"name":"Gerik Scheuermann"},{"affiliations":["Leipzig University, Leipzig, Germany"],"email":"heine@informatik.uni-leipzig.de","is_corresponding":false,"name":"Christian Heine"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Tomas Rodolfo Daetz Chacon"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1119","time_end":"","time_stamp":"","time_start":"","title":"Visualization of 2D Scalar Field Ensembles Using Volume Visualization of the Empirical Distribution Function","uid":"v-short-1119","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Many real-world networks contain structurally-equivalent nodes. These are defined as vertices that share the same set of neighboring nodes, making them interchangeable with a traditional graph layout approach. However, many real-world graphs also have properties associated with nodes, adding additional meaning to them. We present an approach for swapping locations of structurally-equivalent nodes in graph layout so that those with more similar properties have closer proximity to each other. This improves the usefulness of the visualization from an attribute perspective without negatively impacting the visualization from a structural perspective. 
We include an algorithm for finding these sets of nodes in linear time, as well as methodologies for ordering nodes based on their attribute similarity, which works for scalar, ordinal, multidimensional, and categorical data.","accessible_pdf":false,"authors":[{"affiliations":["Pacific Northwest National Lab, Richland, United States"],"email":"patrick.mackey@pnnl.gov","is_corresponding":true,"name":"Patrick Mackey"},{"affiliations":["University of Arizona, Tucson, United States","Pacific Northwest National Laboratory, Richland, United States"],"email":"jacobmiller1@arizona.edu","is_corresponding":false,"name":"Jacob Miller"},{"affiliations":["Pacific Northwest National Laboratory, Richland, United States"],"email":"liz.f@pnnl.gov","is_corresponding":false,"name":"Liz Faultersack"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Patrick Mackey"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1121","time_end":"","time_stamp":"","time_start":"","title":"Improving Property Graph Layouts by Leveraging Attribute Similarity for Structurally Equivalent Nodes","uid":"v-short-1121","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Psychological research often involves understanding psychological constructs through conducting factor analysis on data collected by a questionnaire, which can comprise hundreds of questions. Without interactive systems for interpreting factor models, researchers are frequently exposed to subjectivity, potentially leading to misinterpretations or overlooked crucial information. This paper introduces FAVis, a novel interactive visualization tool designed to aid researchers in interpreting and evaluating factor analysis results. FAVis enhances the understanding of relationships between variables and factors by supporting multiple views for visualizing factor loadings and correlations, allowing users to analyze information from various perspectives. The primary feature of FAVis is to enable users to set optimal thresholds for factor loadings to balance clarity and information retention. FAVis also allows users to assign tags to variables, enhancing the understanding of factors by linking them to their associated psychological constructs. We conduct a case study on a dataset from the Motivational State Questionnaire, utilizing a three-factor common factor model. 
Our user study demonstrates the utility of FAVis in various tasks.","accessible_pdf":false,"authors":[{"affiliations":["University of Notre Dame, Notre Dame, United States","University of Notre Dame, Notre Dame, United States"],"email":"ylu22@nd.edu","is_corresponding":true,"name":"Yikai Lu"},{"affiliations":["University of Notre Dame, Notre Dame, United States"],"email":"chaoli.wang@nd.edu","is_corresponding":false,"name":"Chaoli Wang"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yikai Lu"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1126","time_end":"","time_stamp":"","time_start":"","title":"FAVis: Visual Analytics of Factor Analysis for Psychological Research","uid":"v-short-1126","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"In this paper, we analyze the Apple Vision Pro hardware and the visionOS software platform, assessing their capabilities for volume rendering of structured grids, a prevalent technique across various applications. The Apple Vision Pro supports multiple display modes, from classical augmented reality (AR) using video see-through technology to immersive virtual reality (VR) environments that exclusively render virtual objects. These modes utilize different APIs and exhibit distinct capabilities. Our focus is on direct volume rendering, selected for its implementation challenges due to the native graphics APIs being predominantly oriented towards surface shading. Volume rendering is particularly vital in fields where AR and VR visualizations offer substantial benefits, such as in medicine and manufacturing. Despite its initial high cost, we anticipate that the Vision Pro will become more accessible and affordable over time, following Apple's track record of market expansion. 
As these devices become more prevalent, understanding how to effectively program and utilize them becomes increasingly important, offering significant opportunities for innovation and practical applications in various sectors.","accessible_pdf":false,"authors":[{"affiliations":["University of Duisburg-Essen, Duisburg, Germany"],"email":"camilla.hrycak@uni-due.de","is_corresponding":true,"name":"Camilla Hrycak"},{"affiliations":["University of Duisburg-Essen, Duisburg, Germany"],"email":"david.lewakis@stud.uni-due.de","is_corresponding":false,"name":"David Lewakis"},{"affiliations":["University of Duisburg-Essen, Duisburg, Germany"],"email":"jens.krueger@uni-due.de","is_corresponding":false,"name":"Jens Harald Krueger"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Camilla Hrycak"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1127","time_end":"","time_stamp":"","time_start":"","title":"Investigating the Apple Vision Pro Spatial Computing Platform for GPU-Based Volume Visualization","uid":"v-short-1127","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Visualization, from simple line plots to complex high-dimensional visual analysis systems, has established itself throughout numerous domains to explore, analyze, and evaluate data. Applying such visualizations in the context of simulation science where High-Performance Computing (HPC) produces ever-growing amounts of data that is more complex, potentially multidimensional, and multi-modal, requires resources and a high level of technological experience often not available to domain experts. In this work, we present DaVE - a curated database of visualization examples, which aims to provide state-of-the-art and advanced visualization methods that arise in the context of HPC applications. Based on domain- or data-specific descriptors entered by the user, DaVE provides a list of appropriate visualization techniques, each accompanied by descriptions, examples, references, and resources. Sample code, adaptable container templates, and recipes for easy integration in HPC applications can be downloaded for easy access to high-fidelity visualizations. 
While the database is currently filled with a limited number of entries based on a broad evaluation of needs and challenges of current HPC users, DaVE is designed to be easily extended by experts from both the visualization and HPC communities.","accessible_pdf":false,"authors":[{"affiliations":["RWTH Aachen University, Aachen, Germany"],"email":"koenen@informatik.rwth-aachen.de","is_corresponding":true,"name":"Jens Koenen"},{"affiliations":["RPTU Kaiserslautern-Landau, Kaiserslautern, Germany"],"email":"m.petersen@rptu.de","is_corresponding":false,"name":"Marvin Petersen"},{"affiliations":["RPTU Kaiserslautern-Landau, Kaiserslautern, Germany"],"email":"garth@rptu.de","is_corresponding":false,"name":"Christoph Garth"},{"affiliations":["RWTH Aachen University, Aachen, Germany"],"email":"gerrits@vis.rwth-aachen.de","is_corresponding":false,"name":"Tim Gerrits"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Jens Koenen"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1130","time_end":"","time_stamp":"","time_start":"","title":"DaVE - A Curated Database of Visualization Examples","uid":"v-short-1130","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Humans struggle to perceive and interpret high-dimensional data. Therefore, high-dimensional data are often projected into two dimensions for visualization. Many applications benefit from complex nonlinear dimensionality reduction techniques, but the effects of individual high-dimensional features are hard to explain in the two-dimensional space. Most visualization solutions use multiple two-dimensional plots, each showing the effect of one high-dimensional feature in two dimensions; this approach creates a need for a visual inspection of k plots for a k-dimensional input space. Our solution, Feature Clock, provides a novel approach that eliminates the need to inspect these k plots to grasp the influence of original features on the data structure depicted in two dimensions. 
Feature Clock enhances the explainability and compactness of visualizations of embedded data and is available in an open-source Python library.","accessible_pdf":false,"authors":[{"affiliations":["ETH Z\u00fcrich, Z\u00fcrich, Switzerland"],"email":"ovcharenko.folga@gmail.com","is_corresponding":true,"name":"Olga Ovcharenko"},{"affiliations":["ETH Z\u00fcrich, Z\u00fcrich, Switzerland"],"email":"rita.sevastjanova@uni-konstanz.de","is_corresponding":false,"name":"Rita Sevastjanova"},{"affiliations":["ETH Zurich, Z\u00fcrich, Switzerland"],"email":"valentina.boeva@inf.ethz.ch","is_corresponding":false,"name":"Valentina Boeva"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Olga Ovcharenko"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1135","time_end":"","time_stamp":"","time_start":"","title":"Feature Clock: High-Dimensional Effects in Two-Dimensional Plots","uid":"v-short-1135","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Reconstruction of 3D scenes from 2D images is a technical challenge that impacts domains from Earth and planetary sciences and space exploration to augmented and virtual reality. Typically, reconstruction algorithms first identify common features across images and then minimize reconstruction errors after estimating the shape of the terrain. This bundle adjustment (BA) step optimizes around a single, simplifying scalar value that obfuscates many possible causes of reconstruction errors (e.g., initial estimate of the position and orientation of the camera, lighting conditions, ease of feature detection in the terrain). Reconstruction errors can lead to inaccurate scientific inferences or endanger a spacecraft exploring a remote environment. To address this challenge, we present VECTOR, a visual analysis tool that improves error inspection for stereo reconstruction BA. VECTOR provides analysts with previously unavailable visibility into feature locations, camera pose, and computed 3D points. VECTOR was developed in partnership with the Perseverance Mars Rover and Ingenuity Mars Helicopter terrain reconstruction team at the NASA Jet Propulsion Laboratory. 
We report on how this tool was used to debug and improve terrain reconstruction for the Mars 2020 mission.","accessible_pdf":false,"authors":[{"affiliations":["Northeastern University, Boston, United States"],"email":"racquel.fygenson@gmail.com","is_corresponding":false,"name":"Racquel Fygenson"},{"affiliations":["Weta FX, Auckland, New Zealand"],"email":"kjawad@andrew.cmu.edu","is_corresponding":false,"name":"Kazi Jawad"},{"affiliations":["Art Center, Pasadena, United States"],"email":"zongzhanisabelli@gmail.com","is_corresponding":false,"name":"Zongzhan Li"},{"affiliations":["California Institute of Technology, Pasadena, United States"],"email":"francois.ayoub@jpl.nasa.gov","is_corresponding":false,"name":"Francois Ayoub"},{"affiliations":["California Institute of Technology, Pasadena, United States"],"email":"bob.deen@jpl.nasa.gov","is_corresponding":false,"name":"Robert G Deen"},{"affiliations":["California Institute of Technology, Pasadena, United States"],"email":"sd@scottdavidoff.com","is_corresponding":false,"name":"Scott Davidoff"},{"affiliations":["Carnegie Mellon University, Pittsburgh, United States"],"email":"domoritz@cmu.edu","is_corresponding":false,"name":"Dominik Moritz"},{"affiliations":["NASA-JPL, Pasadena, United States"],"email":"mauricio.a.hess.flores@jpl.nasa.gov","is_corresponding":true,"name":"Mauricio Hess-Flores"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Mauricio Hess-Flores"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1144","time_end":"","time_stamp":"","time_start":"","title":"Opening the black box of 3D reconstruction error analysis with VECTOR","uid":"v-short-1144","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Millions of runners rely on smart watches that display running-related metrics such as pace, heart rate and distance for training and racing -- mostly with text and numbers. Although research tells us that visualizations are a good alternative to text on smart watches, we know little about how visualizations can help in realistic running scenarios. We conducted a study in which 20 runners completed running-related tasks on an outdoor track using both text and visualizations. 
Our results show that runners are 1.5 to 8 times faster in completing those tasks with visualizations than with text, prefer visualizations to text, and would use such visualizations while running -- were they available on their smart watch.","accessible_pdf":false,"authors":[{"affiliations":["University of Victoria, Victoria, Canada"],"email":"sarinaksj@uvic.ca","is_corresponding":false,"name":"Sarina Kashanj"},{"affiliations":["University of Victoria, Victoria, Canada","Delft University of Technology, Delft, Netherlands"],"email":"xiyao.wang23@gmail.com","is_corresponding":false,"name":"Xiyao Wang"},{"affiliations":["University of Victoria, Victoria, Canada"],"email":"cperin@uvic.ca","is_corresponding":true,"name":"Charles Perin"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Charles Perin"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1146","time_end":"","time_stamp":"","time_start":"","title":"Visualizations on Smart Watches while Running: It Actually Helps!","uid":"v-short-1146","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Exploratory visual data analysis tools empower data analysts to efficiently and intuitively explore data insights throughout the entire analysis cycle. However, the gap between common programmatic analysis (e.g., within computational notebooks) and exploratory visual analysis leads to a disjointed and inefficient data analysis experience. To bridge this gap, we developed PyGWalker, a Python library that offers on-the-fly assistance for exploratory visual data analysis. It features a lightweight and intuitive GUI with a shelf builder modality. Its loosely coupled architecture supports multiple computational environments to accommodate varying data sizes. Since its release in February 2023, PyGWalker has gained much attention, with 468k downloads on PyPI and over 9.8k stars on GitHub as of April 2024. 
This demonstrates its value to the data science and visualization community, with researchers and developers integrating it into their own applications and studies.","accessible_pdf":false,"authors":[{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China","Kanaries Data Inc., Hangzhou, China"],"email":"yue.yu@connect.ust.hk","is_corresponding":true,"name":"Yue Yu"},{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"lshenaj@connect.ust.hk","is_corresponding":false,"name":"Leixian Shen"},{"affiliations":["Kanaries Data Inc., Hangzhou, China"],"email":"feilong@kanaries.net","is_corresponding":false,"name":"Fei Long"},{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"huamin@cse.ust.hk","is_corresponding":false,"name":"Huamin Qu"},{"affiliations":["Kanaries Data Inc., Hangzhou, China"],"email":"haochen@kanaries.net","is_corresponding":false,"name":"Hao Chen"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yue Yu"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1150","time_end":"","time_stamp":"","time_start":"","title":"PyGWalker: On-the-fly Assistant for Exploratory Visual Data Analysis","uid":"v-short-1150","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Augmented reality (AR) area labels can highlight real-life objects, visualize real world regions with arbitrary boundaries, and show invisible objects or features. Environment conditions such as lighting and clutter can decrease fixed or passive label visibility, and labels that have high opacity levels can occlude crucial details in the environment. We design and evaluate active AR area label visualization modes to enhance visibility across real-life environments, while still retaining environment details within the label. For this, we define a distant characteristic color from the environment in perceptual CIELAB space, then introduce spatial variations among label pixel colors based on the underlying environment variation. 
In a user study with 18 participants, we discovered that our active label visualization modes can be comparable in visibility to a fixed green baseline by Gabbard et al., and can outperform it with added spatial variation in cluttered environments, across varying levels of lighting (e.g., nighttime), and in environments with colors similar to the fixed baseline color.","accessible_pdf":false,"authors":[{"affiliations":["Brown University, Providence, United States"],"email":"hojung_kwon@brown.edu","is_corresponding":false,"name":"Hojung Kwon"},{"affiliations":["Brown University, Providence, United States"],"email":"yuanbo_li@brown.edu","is_corresponding":false,"name":"Yuanbo Li"},{"affiliations":["Brown University, Providence, United States"],"email":"chloe_ye2019@hotmail.com","is_corresponding":false,"name":"Xiaohan Ye"},{"affiliations":["Brown University, Providence, United States"],"email":"praccho_muna-mcquay@brown.edu","is_corresponding":false,"name":"Praccho Muna-McQuay"},{"affiliations":["Duke University, Durham, United States"],"email":"liuren.yin@duke.edu","is_corresponding":false,"name":"Liuren Yin"},{"affiliations":["Brown University, Providence, United States"],"email":"james_tompkin@brown.edu","is_corresponding":true,"name":"James Tompkin"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["James Tompkin"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1155","time_end":"","time_stamp":"","time_start":"","title":"Active Appearance and Spatial Variation Can Improve Visibility in Area Labels for Augmented Reality","uid":"v-short-1155","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Compound graphs are networks in which vertices can be grouped into larger subsets, with these subsets capable of further grouping, resulting in a nesting that can be many levels deep. Such graphs arise in several applications including biological workflows, chemical equations, and computational data flow analysis. Common layouts prioritize the lowest level of the grouping, down to the individual ungrouped vertices, which can make the higher level grouped structures more difficult to discern, especially in deeply nested networks. We contribute an overview+detail layout that preserves the saliency of the higher level network structure when groups are expanded to show internal nested structure. Our layout draws inner structures adjacent to their parents, using a modified tree layout to place substructures. We describe our algorithm and then present case studies demonstrating the layout's utility to a domain expert working on data flow analysis. 
Finally, we discuss network parameters and analysis situations in which our layout is well suited.","accessible_pdf":false,"authors":[{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"hatch.on27@gmail.com","is_corresponding":true,"name":"Chang Han"},{"affiliations":["University of Arizona, Tucson, United States"],"email":"lieffers@arizona.edu","is_corresponding":false,"name":"Justin Lieffers"},{"affiliations":["University of Arizona, Tucson, United States"],"email":"claytonm@arizona.edu","is_corresponding":false,"name":"Clayton Morrison"},{"affiliations":["The University of Utah, Salt Lake City, United States"],"email":"kisaacs@sci.utah.edu","is_corresponding":false,"name":"Katherine E. Isaacs"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Chang Han"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1156","time_end":"","time_stamp":"","time_start":"","title":"An Overview + Detail Layout for Visualizing Compound Graphs","uid":"v-short-1156","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"With two studies, we assess how different walking trajectories (straight line, circular, and infinity) and speeds (2 km/h, 4 km/h, and 6 km/h) influence the accuracy and response time of participants reading micro visualizations on a smartwatch. We showed our participants common watch face micro visualizations including date, time, weather information, and four complications showing progress charts of fitness data. Our findings suggest that while walking trajectories did not significantly affect reading performance, overall walking activity, especially at high speeds, hurt reading accuracy and, to some extent, response time.","accessible_pdf":false,"authors":[{"affiliations":["University of Stuttgart, Stuttgart, Germany"],"email":"fairouz.grioui@vis.uni-stuttgart.de","is_corresponding":true,"name":"Fairouz Grioui"},{"affiliations":["University of Stuttgart, Stuttgart, Germany"],"email":"research@blascheck.eu","is_corresponding":false,"name":"Tanja Blascheck"},{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Orsay, France","Inria, Saclay, France"],"email":"yaolijie0219@gmail.com","is_corresponding":false,"name":"Lijie Yao"},{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Orsay, France","Inria, Saclay, France"],"email":"petra.isenberg@inria.fr","is_corresponding":false,"name":"Petra Isenberg"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Fairouz Grioui"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1159","time_end":"","time_stamp":"","time_start":"","title":"Micro Visualizations on a Smartwatch: Assessing Reading Performance While Walking","uid":"v-short-1159","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Digital twins are an excellent tool to model, visualize, and simulate complex systems, to understand and optimize their operation. 
In this work, we present the technical challenges of real-time visualization of a digital twin of the Frontier supercomputer. We show the initial prototype and current state of the twin and highlight technical design challenges of visualizing such a large High Performance Computing (HPC) system. The goal is to understand the use of augmented reality as a primary way to extract information and collaborate on digital twins of complex systems. This leverages the spatio-temporal aspect of a 3D representation of a digital twin, with the ability to view historical and real-time telemetry, triggering simulations of a system state and viewing the results, which can be augmented via dashboards for details. Finally, we discuss considerations and opportunities for augmented reality of digital twins of large-scale, parallel computers.","accessible_pdf":false,"authors":[{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"maiterthm@ornl.gov","is_corresponding":true,"name":"Matthias Maiterth"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"brewerwh@ornl.gov","is_corresponding":false,"name":"Wes Brewer"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"dewetd@ornl.gov","is_corresponding":false,"name":"Dane De Wet"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"greenwoodms@ornl.gov","is_corresponding":false,"name":"Scott Greenwood"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"kumarv@ornl.gov","is_corresponding":false,"name":"Vineet Kumar"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"hinesjr@ornl.gov","is_corresponding":false,"name":"Jesse Hines"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"bouknightsl@ornl.gov","is_corresponding":false,"name":"Sedrick L Bouknight"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"wangz@ornl.gov","is_corresponding":false,"name":"Zhe Wang"},{"affiliations":["Hewlett Packard Enterprise, Berkshire, United Kingdom"],"email":"tim.dykes@hpe.com","is_corresponding":false,"name":"Tim Dykes"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"fwang2@ornl.gov","is_corresponding":false,"name":"Feiyi Wang"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Matthias Maiterth"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1161","time_end":"","time_stamp":"","time_start":"","title":"Visualizing an Exascale Data Center Digital Twin: Considerations, Challenges and Opportunities","uid":"v-short-1161","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Integral curves have been widely used to represent and analyze various vector fields. Curve-based clustering and pattern search approaches are usually applied to aid the identification of meaningful patterns from large numbers of integral curves. However, they need not support an interactive, level-of-detail exploration of these patterns. To address this, we propose a Curve Segment Neighborhood Graph (CSNG) to capture the relationships between neighboring curve segments. 
This graph representation enables us to adapt the fast community detection algorithm, i.e., the Louvain algorithm, to identify individual graph communities from CSNG. Our results show that these communities often correspond to the features of the flow. To achieve a multi-level interactive exploration of the detected communities, we adapt a force-directed layout that allows users to refine and re-group communities based on their domain knowledge. We incorporate the proposed techniques into an interactive system to enable effective analysis and interpretation of complex patterns in large-scale integral curve datasets.","accessible_pdf":false,"authors":[{"affiliations":["University of Houston, Houston, United States"],"email":"nguyenpkk95@gmail.com","is_corresponding":true,"name":"Nguyen K Phan"},{"affiliations":["University of Houston, Houston, United States"],"email":"chengu@cs.uh.edu","is_corresponding":false,"name":"Guoning Chen"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Nguyen K Phan"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1163","time_end":"","time_stamp":"","time_start":"","title":"Curve Segment Neighborhood-based Vector Field Exploration","uid":"v-short-1163","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Custom animated visualizations of large, complex datasets are helpful across many domains, but they are hard to develop. Much of the difficulty arises from maintaining visualization state across a large set of animated graphical elements that may change in number over time. We contribute Counterpoint, a framework for state management designed to help implement such visualizations in JavaScript. Using Counterpoint, developers can manipulate large collections of marks with reactive attributes that are easy to render in scalable APIs such as Canvas and WebGL. Counterpoint also helps orchestrate the entry and exit of graphical elements using the concept of a rendering \"stage.\" Through a performance evaluation, we show that Counterpoint adds minimal overhead over current high-performance rendering techniques while simplifying implementation. We also provide two examples of visualizations created using Counterpoint that illustrate its flexibility and compatibility with other visualization toolkits as well as considerations for users with disabilities. 
Counterpoint is open-source and available at https://github.com/cmudig/counterpoint.","accessible_pdf":false,"authors":[{"affiliations":["Carnegie Mellon University, Pittsburgh, United States"],"email":"vsivaram@andrew.cmu.edu","is_corresponding":true,"name":"Venkatesh Sivaraman"},{"affiliations":["Carnegie Mellon University, Pittsburgh, United States"],"email":"fje@cmu.edu","is_corresponding":false,"name":"Frank Elavsky"},{"affiliations":["Carnegie Mellon University, Pittsburgh, United States"],"email":"domoritz@cmu.edu","is_corresponding":false,"name":"Dominik Moritz"},{"affiliations":["Carnegie Mellon University, Pittsburgh, United States"],"email":"adamperer@cmu.edu","is_corresponding":false,"name":"Adam Perer"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Venkatesh Sivaraman"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1166","time_end":"","time_stamp":"","time_start":"","title":"Counterpoint: Orchestrating Large-Scale Custom Animated Visualizations","uid":"v-short-1166","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Visualizing citation relations with network structures is widely used, but the visual complexity can make it challenging for individual researchers to navigate through them. We collected data from 18 researchers using an interface that we designed using network simplification methods and analyzed how users browsed and identified important papers. Our analysis reveals six major patterns used for identifying papers of interest, which can be categorized into three key components: Fields, Bridges, and Foundations, each viewed from two distinct perspectives: layout-oriented and connection-oriented. The connection-oriented approach was found to be more effective for selecting relevant papers, but the layout-oriented method was adopted more often, even though it led to unexpected results and user frustration. Our findings emphasize the importance of integrating these components and the necessity to balance visual layouts with meaningful connections to enhance the effectiveness of citation networks in academic browsing systems.","accessible_pdf":false,"authors":[{"affiliations":["Seoul National University, Seoul, Korea, Republic of"],"email":"krchoe@hcil.snu.ac.kr","is_corresponding":true,"name":"Kiroong Choe"},{"affiliations":["Seoul National University, Seoul, Korea, Republic of"],"email":"gracekim027@snu.ac.kr","is_corresponding":false,"name":"Eunhye Kim"},{"affiliations":["Dept. 
of Electrical and Computer Engineering, SNU, Seoul, Korea, Republic of"],"email":"paulmoguri@snu.ac.kr","is_corresponding":false,"name":"Sangwon Park"},{"affiliations":["Seoul National University, Seoul, Korea, Republic of"],"email":"jseo@snu.ac.kr","is_corresponding":false,"name":"Jinwook Seo"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Kiroong Choe"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1173","time_end":"","time_stamp":"","time_start":"","title":"Fields, Bridges, and Foundations: How Researchers Browse Citation Network Visualizations","uid":"v-short-1173","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"The proliferation of misleading visualizations online, particularly during critical events like public health crises and elections, poses a significant risk of misinformation. This work investigates the capability of GPT-4V to detect misleading visualizations. Utilizing a dataset of tweet-visualization pairs with various visual misleaders, we tested GPT-4V under four experimental conditions: naive zero-shot, naive few-shot, guided zero-shot, and guided few-shot. Our results demonstrate that GPT-4V can detect misleading visualizations with moderate accuracy without prior training (naive zero-shot) and that performance considerably improves by providing the model with the definitions of misleaders (guided zero-shot). However, combining definitions with examples of misleaders (guided few-shot) did not yield further improvements. This study underscores the feasibility of using large vision-language models such as GPT-4V to combat misinformation and emphasizes the importance of optimizing prompt engineering to enhance detection accuracy.","accessible_pdf":false,"authors":[{"affiliations":["University of Massachusetts Amherst, Amherst, United States"],"email":"jhalexander@umass.edu","is_corresponding":false,"name":"Jason Huang Alexander"},{"affiliations":["University of Massachusetts Amherst, Amherst, United States"],"email":"phnanda@umass.edu","is_corresponding":false,"name":"Priyal H Nanda"},{"affiliations":["Northeastern University, Boston, United States"],"email":"yangkc@iu.edu","is_corresponding":false,"name":"Kai-Cheng Yang"},{"affiliations":["University of Massachusetts Amherst, Amherst, United States"],"email":"asarv@cs.umass.edu","is_corresponding":true,"name":"Ali Sarvghad"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Ali Sarvghad"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1177","time_end":"","time_stamp":"","time_start":"","title":"Can GPT-4V Detect Misleading Visualizations?","uid":"v-short-1177","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"An atmospheric front is an imaginary surface that separates two distinct air masses and is commonly defined as the warm-air side of a frontal zone with high gradients of atmospheric temperature and humidity. 
These fronts are a widely used conceptual model in meteorology and are often encountered in the literature as two-dimensional (2D) front lines on surface analysis charts. This paper presents a method for computing three-dimensional (3D) atmospheric fronts as surfaces that is capable of extracting continuous and well-confined features suitable for 3D visual analysis, spatio-temporal tracking, and statistical analyses. Recently developed contour-based methods for 3D front extraction rely on computing the third derivative of a moist potential temperature field. Additionally, they require the field to be smoothed to obtain continuous large-scale structures. This paper demonstrates the feasibility of an alternative method for front extraction using ridge surface computation. The proposed method requires only the second derivative of the input field and produces accurate structures even from unsmoothed data. An application of the ridge-based method to a data set corresponding to Cyclone Friederike demonstrates its benefits and utility towards visual analysis of the full 3D structure of fronts.","accessible_pdf":false,"authors":[{"affiliations":["Zuse Institute Berlin, Berlin, Germany"],"email":"anne.gossing@fu-berlin.de","is_corresponding":true,"name":"Anne Gossing"},{"affiliations":["Universit\u00e4t Hamburg, Hamburg, Germany"],"email":"andreas.beckert@uni-hamburg.de","is_corresponding":false,"name":"Andreas Beckert"},{"affiliations":["Universit\u00e4t Hamburg, Hamburg, Germany"],"email":"christoph.fischer-1@uni-hamburg.de","is_corresponding":false,"name":"Christoph Fischer"},{"affiliations":["Zuse Institute Berlin, Berlin, Germany"],"email":"klenert@zib.de","is_corresponding":false,"name":"Nicolas Klenert"},{"affiliations":["Indian Institute of Science, Bangalore, India"],"email":"vijayn@iisc.ac.in","is_corresponding":false,"name":"Vijay Natarajan"},{"affiliations":["Freie Universit\u00e4t Berlin, Berlin, Germany"],"email":"george.pacey@fu-berlin.de","is_corresponding":false,"name":"George Pacey"},{"affiliations":["Universit\u00e4t Hamburg, Hamburg, Germany"],"email":"thorwin.vogt@uni-hamburg.de","is_corresponding":false,"name":"Thorwin Vogt"},{"affiliations":["Universit\u00e4t Hamburg, Hamburg, Germany"],"email":"marc.rautenhaus@uni-hamburg.de","is_corresponding":false,"name":"Marc Rautenhaus"},{"affiliations":["Zuse Institute Berlin, Berlin, Germany"],"email":"baum@zib.de","is_corresponding":false,"name":"Daniel Baum"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Anne Gossing"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1183","time_end":"","time_stamp":"","time_start":"","title":"A Ridge-based Approach for Extraction and Visualization of 3D Atmospheric Fronts","uid":"v-short-1183","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"To improve the perception of hierarchical structures in data sets, several color map generation algorithms have been proposed to take this structure into account. But the design of hierarchical color maps elicits different requirements to those of color maps for tabular data. Within this paper, we make an initial effort to put design rules from the color map literature into the context of hierarchical color maps. 
We investigate the impact of several design decisions and provide recommendations for various analysis scenarios. Thus, we lay the foundation for objective quality criteria to evaluate hierarchical color maps.","accessible_pdf":false,"authors":[{"affiliations":["Fraunhofer IGD, Darmstadt, Germany"],"email":"tobias.mertz@igd.fraunhofer.de","is_corresponding":true,"name":"Tobias Mertz"},{"affiliations":["Fraunhofer IGD, Darmstadt, Germany","TU Darmstadt, Darmstadt, Germany"],"email":"joern.kohlhammer@igd.fraunhofer.de","is_corresponding":false,"name":"J\u00f6rn Kohlhammer"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Tobias Mertz"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1184","time_end":"","time_stamp":"","time_start":"","title":"Towards a Quality Approach to Hierarchical Color Maps","uid":"v-short-1184","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"The visualization and interactive exploration of geo-referenced networks poses challenges if the network's nodes are not evenly distributed. Our approach proposes new ways of realizing animated transitions for exploring such networks from an ego-perspective. We aim to reduce the required screen estate while maintaining the viewers' mental map of distances and directions. A preliminary study provides first insights into the comprehensiveness of animated geographic transitions regarding directional relationships between start and end point in different projections. Two use cases showcase how ego-perspective graph exploration can be supported using less screen space than previous approaches.","accessible_pdf":false,"authors":[{"affiliations":["University of Stuttgart, Stuttgart, Germany"],"email":"max@mumintroll.org","is_corresponding":true,"name":"Max Franke"},{"affiliations":["University of Stuttgart, Stuttgart, Germany"],"email":"samuel.beck@vis.uni-stuttgart.de","is_corresponding":false,"name":"Samuel Beck"},{"affiliations":["University of Stuttgart, Stuttgart, Germany"],"email":"steffen.koch@vis.uni-stuttgart.de","is_corresponding":false,"name":"Steffen Koch"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Max Franke"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1185","time_end":"","time_stamp":"","time_start":"","title":"Two-point Equidistant Projection and Degree-of-interest Filtering for Smooth Exploration of Geo-referenced Networks","uid":"v-short-1185","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Data visualizations help extract insights from datasets, but reaching these insights requires decomposing high-level goals into low-level analytic tasks that can be complex due to varying degrees of data literacy and visualization experience. Recent advancements in large language models (LLMs) have shown promise for lowering barriers for users to achieve tasks such as writing code and may likewise facilitate visualization insight. 
Scalable Vector Graphics (SVG), a text-based image format common in data visualizations, matches well with the text sequence processing of transformer-based LLMs. In this paper, we explore the capability of LLMs to perform 10 low-level visual analytic tasks defined by Amar, Eagan, and Stasko directly on SVG-based visualizations. Using zero-shot prompts, we instruct the models to provide responses or modify the SVG code based on given visualizations. Our findings demonstrate that LLMs can effectively modify existing SVG visualizations for some tasks like Cluster but perform poorly on tasks requiring mathematical operations like Compute Derived Value. We also discovered that LLM performance can vary based on factors such as the number of data points, the presence of value labels, and the chart type. Our findings contribute to gauging the general capabilities of LLMs and highlight the need for further exploration and development to fully harness their potential in supporting visual analytic tasks.","accessible_pdf":false,"authors":[{"affiliations":["Brown University, Providence, United States"],"email":"leooooxzz@gmail.com","is_corresponding":true,"name":"Zhongzheng Xu"},{"affiliations":["Emory University, Atlanta, United States"],"email":"emily.wall@emory.edu","is_corresponding":false,"name":"Emily Wall"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Zhongzheng Xu"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1186","time_end":"","time_stamp":"","time_start":"","title":"Exploring the Capability of LLMs in Performing Low-Level Visual Analytic Tasks on SVG Data Visualizations","uid":"v-short-1186","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Vortices and their analysis play a critical role in the understanding of complex phenomena in turbulent flow. Traditional vortex extraction methods, notably region-based techniques, often overlook the entanglement phenomenon, resulting in the inclusion of multiple vortices within a single extracted region. Their separation is necessary for quantifying different types of vortices and their statistics. In this study, we propose a novel vortex separation method that extends the conventional contour tree-based segmentation approach with an additional step termed \u201clayering\u201d. Upon extracting a vortical region using specified vortex criteria (e.g., \u03bb2), we initially establish topological segmentation based on the contour tree, followed by the layering process to allocate appropriate segmentation IDs to unsegmented cells, thus separating individual vortices within the region. However, these regions may still suffer from inaccurate splits, which we address statistically by leveraging the continuity of vorticity lines across the split boundaries. 
Our findings demonstrate a significant improvement in both the separation of vortices and the mitigation of inaccurate splits compared to prior methods.","accessible_pdf":false,"authors":[{"affiliations":["University of Houston, Houston, United States"],"email":"adeelz92@gmail.com","is_corresponding":true,"name":"Adeel Zafar"},{"affiliations":["University of Houston, Houston, United States"],"email":"zpoorsha@cougarnet.uh.edu","is_corresponding":false,"name":"Zahra Poorshayegh"},{"affiliations":["University of Houston, Houston, United States"],"email":"diyang@uh.edu","is_corresponding":false,"name":"Di Yang"},{"affiliations":["University of Houston, Houston, United States"],"email":"chengu@cs.uh.edu","is_corresponding":false,"name":"Guoning Chen"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Adeel Zafar"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1188","time_end":"","time_stamp":"","time_start":"","title":"Topological Separation of Vortices","uid":"v-short-1188","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"The information visualization research community commonly produces supporting software to demonstrate technical contributions to the field. However, developing this software tends to be an overwhelming task, and the final product is often a research prototype built without much thought for modularization and re-usability, which makes it harder to replicate and adopt. This paper presents a design pattern for facilitating the creation, dissemination, and re-utilization of visualization techniques using reactive widgets. The design pattern features basic concepts that leverage modern front-end development best practices and standards, which ease development and replication. The paper presents several usage examples of the pattern, templates for implementation, and even a wrapper for facilitating the conversion of any Vega specification into a reactive widget.","accessible_pdf":false,"authors":[{"affiliations":["Northeastern University, San Francisco, United States"],"email":"john.guerra@gmail.com","is_corresponding":true,"name":"John Alexis Guerra-Gomez"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["John Alexis Guerra-Gomez"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1189","time_end":"","time_stamp":"","time_start":"","title":"Towards Reusable and Reactive Widgets for Information Visualization Research and Dissemination","uid":"v-short-1189","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"To enable data-driven decision-making across organizations, data professionals need to share insights with their colleagues in context-appropriate communication channels. 
Many of their colleagues rely on data but are not themselves analysts; furthermore, they are reluctant or unable to use dedicated analytical applications or dashboards, and they expect communication to take place within threaded collaboration platforms such as Slack or Microsoft Teams. In this paper, we introduce a set of six strategies for adapting content from business intelligence (BI) dashboards into appropriate formats for sharing on collaboration platforms, formats that we refer to as dashboard snapshots. Informed by prior studies of enterprise communication around data, these strategies go beyond redesigning or restyling by considering varying levels of data literacy across an organization, introducing affordances for self-service question-answering, and anticipating the post-sharing lifecycle of data artifacts. These strategies involve the use of templates that are matched to common communicative intents, serving to reduce the workload of data professionals. We contribute a formal representation of these strategies and demonstrate their applicability in a comprehensive enterprise communication scenario featuring multiple stakeholders that unfolds over the span of months.","accessible_pdf":false,"authors":[{"affiliations":["Northwestern University, Evanston, United States"],"email":"hyeokkim2024@u.northwestern.edu","is_corresponding":true,"name":"Hyeok Kim"},{"affiliations":["Tableau Research, Seattle, United States"],"email":"arjun.srinivasan.10@gmail.com","is_corresponding":false,"name":"Arjun Srinivasan"},{"affiliations":["Tableau Research, Seattle, United States"],"email":"mbrehmer@uwaterloo.ca","is_corresponding":false,"name":"Matthew Brehmer"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Hyeok Kim"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1191","time_end":"","time_stamp":"","time_start":"","title":"Bringing Data into the Conversation: Adapting Content from Business Intelligence Dashboards for Threaded Collaboration Platforms","uid":"v-short-1191","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Narrative visualization has become a crucial tool in data presentation, merging storytelling with data visualization to convey complex information in an engaging and accessible manner. In this study, we review the design space for narrative visualizations, focusing on animation style, through a comprehensive analysis of 71 papers from key visualization venues. We categorize these papers into six broad themes: Animation Style, Interactivity, Technology Usage, Methodology Development, Evaluation Type, and Application Domain. Our findings reveal a significant evolution in the field, marked by a growing preference for animated and non-interactive techniques. This trend reflects a shift towards minimizing user interaction while enhancing the clarity and impact of data presentation. We also identified key trends and technologies that have shaped the field, highlighting the role of technologies, such as machine learning, in driving these changes. We offer insights into the dynamic interrelations within the narrative visualization domain, suggesting a future research trajectory that balances interactivity with automated tools to foster increased engagement. 
Our work lays the groundwork for future approaches for effective and innovative narrative visualization in diverse applications.","accessible_pdf":false,"authors":[{"affiliations":["Louisiana State University, Baton Rouge, United States"],"email":"jyang44@lsu.edu","is_corresponding":true,"name":"Vyri Junhan Yang"},{"affiliations":["Louisiana State University, Baton Rouge, United States"],"email":"mjasim@lsu.edu","is_corresponding":false,"name":"Mahmood Jasim"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Vyri Junhan Yang"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1192","time_end":"","time_stamp":"","time_start":"","title":"Animating the Narrative: A Review of Animation Styles in Narrative Visualization","uid":"v-short-1192","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"We present LinkQ, a system that leverages a large language model (LLM) to facilitate knowledge graph (KG) query construction through natural language question-answering. Traditional approaches often require detailed knowledge of complex graph querying languages, limiting the ability for users -- even experts -- to acquire valuable insights from KG data. LinkQ simplifies this process by first interpreting a user's question, then converting it into a well-formed KG query. By using the LLM to construct a query instead of directly answering the user's question, LinkQ guards against the LLM hallucinating or generating false, erroneous information. By integrating an LLM into LinkQ, users are able to conduct both exploratory and confirmatory data analysis, with the LLM helping to iteratively refine open-ended questions into precise ones. To demonstrate the efficacy of LinkQ, we conducted a qualitative study with five KG practitioners and distill their feedback. 
Our results indicate that practitioners find LinkQ effective for KG question-answering, and desire future LLM-assisted systems for the exploratory analysis of graph databases.","accessible_pdf":false,"authors":[{"affiliations":["MIT Lincoln Laboratory, Lexington, United States"],"email":"harry.li@ll.mit.edu","is_corresponding":true,"name":"Harry Li"},{"affiliations":["Tufts University, Medford, United States"],"email":"gabriel.appleby@tufts.edu","is_corresponding":false,"name":"Gabriel Appleby"},{"affiliations":["MIT Lincoln Laboratory, Lexington, United States"],"email":"ashley.suh@ll.mit.edu","is_corresponding":false,"name":"Ashley Suh"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Harry Li"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1193","time_end":"","time_stamp":"","time_start":"","title":"LinkQ: An LLM-Assisted Visual Interface for Knowledge Graph Question-Answering","uid":"v-short-1193","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"In the digital landscape, the ubiquity of data visualizations in media underscores the necessity for accessibility to ensure inclusivity for all users, including those with visual impairments. Current visual content often fails to cater to the needs of screen reader users due to the absence of comprehensive textual descriptions. To address this gap, we propose in this paper a framework designed to empower media content creators to transform charts into descriptive narratives. This tool not only facilitates the understanding of complex visual data through text but also fosters a broader awareness of accessibility in digital content creation. Through the application of this framework, users can interpret and convey the insights of data visualizations more effectively, accommodating a diverse audience. Our evaluations reveal that this tool not only enhances the comprehension of data visualizations but also promotes new perspectives on the represented data, thereby broadening the interpretative possibilities for all users.","accessible_pdf":false,"authors":[{"affiliations":["Polytechnique Montr\u00e9al, Montr\u00e9al, Canada"],"email":"qiangxu1204@gmail.com","is_corresponding":true,"name":"Qiang Xu"},{"affiliations":["Polytechnique Montreal, Montreal, Canada"],"email":"thomas.hurtut@polymtl.ca","is_corresponding":false,"name":"Thomas Hurtut"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Qiang Xu"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1199","time_end":"","time_stamp":"","time_start":"","title":"From Graphs to Words: A Computer-Assisted Framework for the Production of Accessible Text Descriptions","uid":"v-short-1199","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"An essential task of an air traffic controller is to manage the traffic flow by predicting future trajectories. Complex traffic patterns are difficult to predict and manage and impose cognitive load on the air traffic controllers. 
In this work, we present an interactive visual analytics interface which facilitates detection and resolution of complex traffic patterns for air traffic controllers (ATCos). The interface supports the users in detecting complex clusters of aircraft and uses visual representations to communicate the detected patterns to the controllers and to propose re-routing. The interface further enables the ATCos to visualize and simultaneously compare how different re-routing strategies for each individual aircraft yield reduction of complexity in the entire sector for the next hour. The development of the concepts was supported by the domain-specific feedback we received from six fully licensed and operational air traffic controllers in an iterative design process over a period of 14 months.","accessible_pdf":false,"authors":[{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"elmira.zohrevandi@liu.se","is_corresponding":true,"name":"Elmira Zohrevandi"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"katerina.vrotsou@liu.se","is_corresponding":false,"name":"Katerina Vrotsou"},{"affiliations":["Institute of Science and Technology, Norrk\u00f6ping, Sweden"],"email":"carl.westin@liu.se","is_corresponding":false,"name":"Carl A. L. Westin"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"jonas.lundberg@liu.se","is_corresponding":false,"name":"Jonas Lundberg"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"anders.ynnerman@liu.se","is_corresponding":false,"name":"Anders Ynnerman"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Elmira Zohrevandi"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1207","time_end":"","time_stamp":"","time_start":"","title":"Design of a Real-Time Visual Analytics Decision Support Interface to Manage Air Traffic Complexity","uid":"v-short-1207","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Transfer function design is crucial in volume rendering, as it directly influences the visual representation and interpretation of volumetric data. However, creating effective transfer functions that align with users\u2019 visual objectives is often challenging due to the complex parameter space and the semantic gap between transfer function values and features of interest within the volume. In this work, we propose a novel approach that leverages recent advancements in language-vision models to bridge this semantic gap. By employing a fully differentiable rendering pipeline and an image-based loss function guided by language descriptions, our method generates transfer functions that yield volume-rendered images closely matching the user\u2019s intent. We demonstrate the effectiveness of our approach in creating meaningful transfer functions from simple descriptions, empowering users to intuitively express their desired visual outcomes with minimal effort. 
This advancement streamlines the transfer function design process and makes volume rendering more accessible to a broader range of users.","accessible_pdf":false,"authors":[{"affiliations":["Vanderbilt University, Nashville, United States"],"email":"sangwon.jeong@vanderbilt.edu","is_corresponding":true,"name":"Sangwon Jeong"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"jixianli@sci.utah.edu","is_corresponding":false,"name":"Jixian Li"},{"affiliations":["Lawrence Livermore National Laboratory , Livermore, United States"],"email":"shusenl@sci.utah.edu","is_corresponding":false,"name":"Shusen Liu"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"crj@sci.utah.edu","is_corresponding":false,"name":"Chris R. Johnson"},{"affiliations":["Vanderbilt University, Nashville, United States"],"email":"matthew.berger@vanderbilt.edu","is_corresponding":false,"name":"Matthew Berger"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Sangwon Jeong"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1211","time_end":"","time_stamp":"","time_start":"","title":"Text-based transfer function design for semantic volume rendering","uid":"v-short-1211","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Diffusion-based generative models\u2019 impressive ability to create convincing images has garnered global attention. However, their complex structures and operations often pose challenges for non-experts to grasp. We present Diffusion Explainer, the first interactive visualization tool that explains how Stable Diffusion transforms text prompts into images. Diffusion Explainer tightly integrates a visual overview of Stable Diffusion\u2019s complex structure with explanations of the underlying operations. By comparing image generation of prompt variants, users can discover the impact of keyword changes on image generation. A 56-participant user study demonstrates that Diffusion Explainer offers substantial learning benefits to non-experts. Our tool has been used by over 10,300 users from 124 countries at https://poloclub.github.io/diffusion-explainer/.","accessible_pdf":false,"authors":[{"affiliations":["Georgia Tech, Atlanta, United States"],"email":"seongmin@gatech.edu","is_corresponding":true,"name":"Seongmin Lee"},{"affiliations":["GA Tech, Atlanta, United States","IBM Research AI, Cambridge, United States"],"email":"benjamin.hoover@ibm.com","is_corresponding":false,"name":"Benjamin Hoover"},{"affiliations":["IBM Research AI, Cambridge, United States"],"email":"hendrik@strobelt.com","is_corresponding":false,"name":"Hendrik Strobelt"},{"affiliations":["Georgia Tech, Atlanta, United States"],"email":"jayw@gatech.edu","is_corresponding":false,"name":"Zijie J. 
Wang"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"speng65@gatech.edu","is_corresponding":false,"name":"ShengYun Peng"},{"affiliations":["Georgia Institute of Technology , Atlanta , United States"],"email":"apwright@gatech.edu","is_corresponding":false,"name":"Austin P Wright"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"kevin.li@gatech.edu","is_corresponding":false,"name":"Kevin Li"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"haekyu@gatech.edu","is_corresponding":false,"name":"Haekyu Park"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"alexanderyang@gatech.edu","is_corresponding":false,"name":"Haoyang Yang"},{"affiliations":["Georgia Tech, Atlanta, United States"],"email":"polo@gatech.edu","is_corresponding":false,"name":"Duen Horng (Polo) Chau"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Seongmin Lee"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1224","time_end":"","time_stamp":"","time_start":"","title":"Diffusion Explainer: Visual Explanation for Text-to-image Stable Diffusion","uid":"v-short-1224","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"A high number of samples often leads to occlusion in scatterplots, which hinders data perception and analysis. De-cluttering approaches based on spatial transformation reduce visual clutter by remapping samples using the entire available scatterplot domain. Such regularized scatterplots may still be used for data analysis tasks, if the spatial transformation is smooth and preserves the original neighborhood relations of samples. Recently, Rave et al. proposed an efficient regularization method based on integral images. We propose a generalization of their regularization scheme using sector-based transformations with the aim of increasing sample uniformity of the resulting scatterplot. 
We document the improvement of our approach using various uniformity measures.","accessible_pdf":false,"authors":[{"affiliations":["University of M\u00fcnster, M\u00fcnster, Germany"],"email":"hennes.rave@uni-muenster.de","is_corresponding":true,"name":"Hennes Rave"},{"affiliations":["University of M\u00fcnster, M\u00fcnster, Germany"],"email":"molchano@uni-muenster.de","is_corresponding":false,"name":"Vladimir Molchanov"},{"affiliations":["University of M\u00fcnster, M\u00fcnster, Germany"],"email":"linsen@uni-muenster.de","is_corresponding":false,"name":"Lars Linsen"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Hennes Rave"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1235","time_end":"","time_stamp":"","time_start":"","title":"Uniform Sample Distribution in Scatterplots via Sector-based Transformation","uid":"v-short-1235","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Automatically generating data visualizations in response to human utterances on datasets necessitates a deep semantic understanding of the data utterance, including implicit and explicit references to data attributes, visualization tasks, and necessary data preparation steps. Natural Language Interfaces (NLIs) for data visualization have explored ways to infer such information, yet challenges persist due to inherent uncertainty in human speech. Recent advances in Large Language Models (LLMs) provide an avenue to address these challenges, but their ability to extract the relevant semantic information remains unexplored. In this study, we evaluate four publicly available LLMs (GPT-4, Gemini-Pro, Llama3, and Mixtral), investigating their ability to comprehend utterances even in the presence of uncertainty and identify the relevant data context and visual tasks. Our findings reveal that LLMs are sensitive to uncertainties in utterances. Despite this sensitivity, they are able to extract the relevant data context. However, LLMs struggle with inferring visualization tasks. Based on these results, we highlight future research directions on using LLMs for visualization generation. Our supplementary materials have been shared on OSF: https://osf.io/j342a/wiki/home/?view_only=b4051ffc6253496d9bce818e4a89b9f9","accessible_pdf":false,"authors":[{"affiliations":["University of Maryland, College Park, United States"],"email":"hbako@umd.edu","is_corresponding":true,"name":"Hannah K. Bako"},{"affiliations":["University of Maryland, College Park, United States"],"email":"arshnoorbhutani8@gmail.com","is_corresponding":false,"name":"Arshnoor Bhutani"},{"affiliations":["The University of Texas at Austin, Austin, United States"],"email":"xinyi.liu@utexas.edu","is_corresponding":false,"name":"Xinyi Liu"},{"affiliations":["University of Maryland, College Park, United States"],"email":"kcobbina@cs.umd.edu","is_corresponding":false,"name":"Kwesi Adu Cobbina"},{"affiliations":["University of Maryland, College Park, United States"],"email":"leozcliu@umd.edu","is_corresponding":false,"name":"Zhicheng Liu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Hannah K. 
Bako"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1236","time_end":"","time_stamp":"","time_start":"","title":"Evaluating the Semantic Profiling Abilities of LLMs for Natural Language Utterances in Data Visualization","uid":"v-short-1236","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Statistical practices such as building regression models or running hypothesis tests rely on following rigorous procedures of steps and verifying assumptions on data to produce valid results. However, common statistical tools do not verify users\u2019 decision choices and provide low-level statistical functions without instructions on the whole analysis practice. Users can easily misuse analysis methods, potentially decreasing the validity of results. To address this problem, we introduce GuidedStats, an interactive interface within computational notebooks that encapsulates guidance, models, visualization, and exportable results into interactive workflows. It breaks down typical analysis processes, such as linear regression and two-sample T-tests, into interactive steps supplemented with automatic visualizations and explanations for step-wise evaluation. Users can iterate on input choices to refine their models, while recommended actions and exports allow the user to continue their analysis in code. Case studies show how GuidedStats offers valuable instructions for conducting fluid statistical analyses while finding possible assumption violations in the underlying data, supporting flexible and accurate statistical analyses.","accessible_pdf":false,"authors":[{"affiliations":["New York University, New York, United States"],"email":"yz9381@nyu.edu","is_corresponding":true,"name":"Yuqi Zhang"},{"affiliations":["Carnegie Mellon University, Pittsburgh, United States"],"email":"adamperer@cmu.edu","is_corresponding":false,"name":"Adam Perer"},{"affiliations":["Carnegie Mellon University, Pittsburgh, United States"],"email":"willepp@cmu.edu","is_corresponding":false,"name":"Will Epperson"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yuqi Zhang"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1248","time_end":"","time_stamp":"","time_start":"","title":"Guided Statistical Workflows with Interactive Explanations and Assumption Checking","uid":"v-short-1248","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"The Local Moran's I statistic is a valuable tool for identifying localized patterns of spatial autocorrelation. Understanding these patterns is crucial in spatial analysis, but interpreting the statistic can be difficult. To simplify this process, we introduce three novel visualizations that enhance the interpretation of Local Moran's I results. These visualizations can be interactively linked to one another, and to established visualizations, to offer a more holistic exploration of the results. 
We provide a JavaScript library with implementations of these new visual elements, along with a web dashboard that demonstrates their integrated use.","accessible_pdf":false,"authors":[{"affiliations":["NIH, Rockville, United States","Queen's University, Belfast, United Kingdom"],"email":"masonlk@nih.gov","is_corresponding":true,"name":"Lee Mason"},{"affiliations":["Queen's University Belfast, Belfast, United Kingdom"],"email":"b.hicks@qub.ac.uk","is_corresponding":false,"name":"Bl\u00e1naid Hicks"},{"affiliations":["National Institutes of Health, Rockville, United States"],"email":"jonas.dealmeida@nih.gov","is_corresponding":false,"name":"Jonas S Almeida"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Lee Mason"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1264","time_end":"","time_stamp":"","time_start":"","title":"Demystifying Spatial Dependence: Interactive Visualizations for Interpreting Local Spatial Autocorrelation","uid":"v-short-1264","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"This study examines the impact of positive and negative contrast polarities (i.e., light and dark modes) on the performance of younger adults and people in their late adulthood (PLA). In a crowdsourced study with 134 participants (69 below age 60, 66 aged 60 and above), we assessed their accuracy and time performing analysis tasks across three common visualization types (Bar, Line, Scatterplot) and two contrast polarities (positive and negative). We observed that, across both age groups, the polarity that led to better performance and the resulting amount of improvement varied on an individual basis, with each polarity benefiting comparable proportions of participants. Additionally, we observed that the choice of contrast polarity can have an impact on time similar to that of the choice of visualization type, resulting in an average percent difference of around 36%. These findings indicate that, overall, the effects of contrast polarity on visual analysis performance do not noticeably change with age. Furthermore, they underscore the importance of making visualizations available in both contrast polarities to better support a broad audience with differing needs.","accessible_pdf":false,"authors":[{"affiliations":["University of Massachusetts Amherst, Amherst, United States"],"email":"zwhile@cs.umass.edu","is_corresponding":true,"name":"Zack While"},{"affiliations":["University of Massachusetts Amherst, Amherst, United States"],"email":"asarv@cs.umass.edu","is_corresponding":false,"name":"Ali Sarvghad"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Zack While"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1274","time_end":"","time_stamp":"","time_start":"","title":"Dark Mode or Light Mode? 
Exploring the Impact of Contrast Polarity on Visualization Performance Between Age Groups","uid":"v-short-1274","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Machine Learning models for chart-grounded Q&A (CQA) often treat charts as images, but performing CQA on pixel values has proven challenging. We thus investigate a resource overlooked by current ML-based approaches: the declarative documents describing how charts should visually encode data (i.e., chart specifications). In this work, we use chart specifications to enhance language models (LMs) for chart-reading tasks, such that the resulting system can robustly understand language for CQA. Through a case study with 359 bar charts, we test novel fine tuning schemes on both GPT-3 and T5 using a new dataset curated for two CQA tasks: question-answering and visual explanation generation. Our text-only approaches strongly outperform vision-based GPT-4 on explanation generation (99% vs. 63% accuracy), and show promising results for question-answering (57-67% accuracy). Through in-depth experiments, we also show that our text-only approaches are mostly robust to natural language variation.","accessible_pdf":false,"authors":[{"affiliations":["Adobe Research, San Jose, United States"],"email":"victorbursztyn2022@u.northwestern.edu","is_corresponding":true,"name":"Victor S. Bursztyn"},{"affiliations":["Adobe Research, Seattle, United States"],"email":"jhoffs@adobe.com","is_corresponding":false,"name":"Jane Hoffswell"},{"affiliations":["Adobe Research, San Jose, United States"],"email":"sguo@adobe.com","is_corresponding":false,"name":"Shunan Guo"},{"affiliations":["Adobe Research, San Jose, United States"],"email":"eunyee@adobe.com","is_corresponding":false,"name":"Eunyee Koh"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Victor S. Bursztyn"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1276","time_end":"","time_stamp":"","time_start":"","title":"Representing Charts as Text for Language Models: An In-Depth Study of Question Answering for Bar Charts","uid":"v-short-1276","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Trust is a subjective yet fundamental component of human-computer interaction, and is a determining factor in shaping the efficacy of data visualizations. Prior research has identified five dimensions of trust assessment in visualizations (credibility, clarity, reliability, familiarity, and confidence), and observed that these dimensions tend to vary predictably along with certain features of the visualization being evaluated. This raises a further question: how do the design features driving viewers' trust assessment vary with the characteristics of the viewers themselves? By reanalyzing data from these studies through the lens of individual differences, we build a more detailed map of the relationships between design features, individual characteristics, and trust behaviors. In particular, we model the distinct contributions of endogenous design features (such as visualization type, or the use of color) and exogenous user characteristics (such as visualization literacy), as well as the interactions between them. 
We then use these findings to make recommendations for individualized and adaptive visualization design.","accessible_pdf":false,"authors":[{"affiliations":["Smith College, Northampton, United States"],"email":"jcrouser@smith.edu","is_corresponding":true,"name":"R. Jordan Crouser"},{"affiliations":["Smith College, Northampton, United States"],"email":"cmatoussi@smith.edu","is_corresponding":false,"name":"Syrine Matoussi"},{"affiliations":["Smith College, Northampton, United States"],"email":"ekung@smith.edu","is_corresponding":false,"name":"Lan Kung"},{"affiliations":["Washington University in St. Louis, St. Louis, United States"],"email":"p.saugat@wustl.edu","is_corresponding":false,"name":"Saugat Pandey"},{"affiliations":["Washington University in St. Louis, St. Louis, United States"],"email":"m.oen@wustl.edu","is_corresponding":false,"name":"Oen G McKinley"},{"affiliations":["Washington University in St. Louis, St. Louis, United States"],"email":"alvitta@wustl.edu","is_corresponding":false,"name":"Alvitta Ottley"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["R. Jordan Crouser"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1277","time_end":"","time_stamp":"","time_start":"","title":"Building and Eroding: Exogenous and Endogenous Factors that Influence Subjective Trust in Visualization","uid":"v-short-1277","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"This study examines the impact of social-comparison risk visualizations on public health communication, comparing the effects of traditional bar charts against alternative jitter plots emphasizing geographic variability (geo jitter). The research highlights that whereas both visualization types increased perceived vulnerability, behavioral intent, and policy support, the geo jitter plots were significantly more effective in reducing unjustified personal attributions. Importantly, the findings also underscore the emotional challenges faced by visualization viewers from marginalized communities, indicating a need for designs that are sensitive to the potential for reinforcing stereotypes or eliciting negative emotions. This work suggests a strategic reevaluation of visual communication tools in public health to enhance understanding and engagement without contributing to negative attributions or emotional distress.","accessible_pdf":false,"authors":[{"affiliations":["3iap, Raleigh, United States"],"email":"eli@3iap.com","is_corresponding":false,"name":"Eli Holder"},{"affiliations":["Northeastern University, Boston, United States","University of California Merced, Merced, United States"],"email":"l.padilla@northeastern.edu","is_corresponding":true,"name":"Lace M. Padilla"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Lace M. 
Padilla"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1285","time_end":"","time_stamp":"","time_start":"","title":"\"Must Be a Tuesday\": Affect, Attribution, and Geographic Variability in Equity-Oriented Visualizations of Population Health Disparities","uid":"v-short-1285","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Collaborative planning for congenital heart diseases typically involves creating physical heart models through 3D printing, which are then examined by both surgeons and cardiologists. Recent developments in mobile augmented reality (AR) technologies have presented a viable alternative, known for their ease of use and portability. However, there is still a lack of research examining the utilization of multi-user mobile AR environments to support collaborative planning for cardiovascular surgeries. We created ARCollab, an iOS AR app designed for enabling multiple surgeons and cardiologists to interact with a patient's 3D heart model in a shared environment. ARCollab enables surgeons and cardiologists to import heart models, manipulate them through gestures and collaborate with other users, eliminating the need for fabricating physical heart models. Our evaluation of ARCollab's usability and usefulness in enhancing collaboration, conducted with three cardiothoracic surgeons and two cardiologists, marks the first human evaluation of a multi-user mobile AR tool for surgical planning. ARCollab is open-source, available at https://github.com/poloclub/arcollab.","accessible_pdf":false,"authors":[{"affiliations":["Georgia Tech, Atlanta, United States"],"email":"pratham.mehta001@gmail.com","is_corresponding":true,"name":"Pratham Darrpan Mehta"},{"affiliations":["Georgia Tech, Atlanta, United States"],"email":"rnarayanan39@gatech.edu","is_corresponding":false,"name":"Rahul Ozhur Narayanan"},{"affiliations":["Georgia Tech, Atlanta, United States"],"email":"harsha5431@gmail.com","is_corresponding":false,"name":"Harsha Karanth"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"alexanderyang@gatech.edu","is_corresponding":false,"name":"Haoyang Yang"},{"affiliations":["Emory University, Atlanta, United States"],"email":"slesnickt@kidsheart.com","is_corresponding":false,"name":"Timothy C Slesnick"},{"affiliations":["Emory University/Children's Healthcare of Atlanta, Atlanta, United States"],"email":"fawwaz.shaw@choa.org","is_corresponding":false,"name":"Fawwaz Shaw"},{"affiliations":["Georgia Tech, Atlanta, United States"],"email":"polo@gatech.edu","is_corresponding":false,"name":"Duen Horng (Polo) Chau"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Pratham Darrpan Mehta"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1292","time_end":"","time_stamp":"","time_start":"","title":"Multi-User Mobile Augmented Reality for Cardiovascular Surgical Planning","uid":"v-short-1292","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Reactionary delay'' is a result of the accumulated cascading 
effects of knock-on train delays. It is becoming an increasing problem as shared railway infrastructure becomes more crowded. The chaotic nature of its effects is notoriously hard to predict. We use a stochastic Monte-Carto-style simulation of reactionary delay that produces whole distributions of likely reactionary delay. Our contribution is the demonstrating how Zoomable GlyphTables -- case-by-variable tables in which cases are rows, variables are columns, variables are complex composite metrics that incorporate distributions, and cells contain mini-charts that depict these as different level of detail through zoom interaction -- help interpret these results for helping understanding the causes and effects of reactionary delay and how they have been informing timetable robustness testing and tweaking. We describe our design principles, demonstrate how this supported our analytical tasks and we reflect on wider potential for Zoomable GlyphTables to be used more widely.","accessible_pdf":false,"authors":[{"affiliations":["City, University of London, London, United Kingdom"],"email":"a.slingsby@city.ac.uk","is_corresponding":true,"name":"Aidan Slingsby"},{"affiliations":["Risk Solutions, Warrington, United Kingdom"],"email":"jonathan.hyde@risksol.co.uk","is_corresponding":false,"name":"Jonathan Hyde"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Aidan Slingsby"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1301","time_end":"","time_stamp":"","time_start":"","title":"Zoomable Glyph Tables for Interpreting Probabilistic Model Outputs for Reactionary Train Delays","uid":"v-short-1301","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""}],"time_start":"","title":"Short Papers","track":"","zoom_broadcast_link":"","zoom_private_link":"","zoom_private_meeting":"","zoom_private_password":""}]},"v-siggraph":{"event":"SIGGRAPH Invited Partnership Presentations","event_description":"","event_prefix":"v-siggraph","event_type":"invited","event_url":"","long_name":"SIGGRAPH Invited Partnership Presentations","organizers":[],"sessions":[]},"v-spotlights":{"event":"Application Spotlights","event_description":"","event_prefix":"v-spotlights","event_type":"application","event_url":"","long_name":"Application Spotlights","organizers":[],"sessions":[]},"v-tvcg":{"event":"TVCG Invited Presentations","event_description":"","event_prefix":"v-tvcg","event_type":"invited","event_url":"","long_name":"TVCG Invited Presentations","organizers":[],"sessions":[{"chair":[],"discord_category":"","discord_channel":"","discord_channel_id":"","discord_link":"","event_prefix":"v-tvcg","ff_link":"","session_id":"tvcg0","session_image":"tvcg0.png","time_end":"","time_slots":[{"abstract":"Data transformation is an essential step in data science. While experts primarily use programming to transform their data, there is an increasing need to support non-programmers with user interface-based tools. With the rapid development in interaction techniques and computing environments, we report our empirical findings about the effects of interaction techniques and environments on performing data transformation tasks. Specifically, we studied the potential benefits of direct interaction and virtual reality (VR) for data transformation. 
We compared gesture interaction versus a standard WIMP user interface, each on the desktop and in VR. With the tested data and tasks, we found time performance was similar between desktop and VR. Meanwhile, VR demonstrates preliminary evidence to better support provenance and sense-making throughout the data transformation process. Our exploration of performing data transformation in VR also provides initial affirmation for enabling an iterative and fully immersive data science workflow.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Sungwon In"],"doi":"10.1109/TVCG.2023.3299602","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Immersive Analytics, Data Transformation, Data Science, Interaction, Empirical Study, Virtual/Augmented/Mixed Reality"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233299602","time_end":"","time_stamp":"","time_start":"","title":"This is the Table I Want! Interactive Data Transformation on Desktop and in Virtual Reality","uid":"v-tvcg-20233299602","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"The dynamic network visualization design space consists of two major dimensions: network structural and temporal representation. As more techniques are developed and published, a clear need for evaluation and experimental comparisons between them emerges. Most studies explore the temporal dimension and diverse interaction techniques supporting the participants, focusing on a single structural representation. Empirical evidence about performance and preference for different visualization approaches is scattered over different studies, experimental settings, and tasks. This paper aims to comprehensively investigate the dynamic network visualization design space in two evaluations. First, a controlled study assessing participants' response times, accuracy, and preferences for different combinations of network structural and temporal representations on typical dynamic network exploration tasks, with and without the support of standard interaction methods. Second, the best-performing combinations from the first study are enhanced based on participants' feedback and evaluated in a heuristic-based qualitative study with visualization experts on a real-world network. Our results highlight node-link with animation and playback controls as the best-performing combination and the most preferred based on ratings. Matrices achieve similar performance to node-link in the first study but have considerably lower scores in our second evaluation. 
Similarly, juxtaposition exhibits evident scalability issues in more realistic analysis contexts.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Velitchko Filipov"],"doi":"10.1109/TVCG.2023.3310019","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233310019","time_end":"","time_stamp":"","time_start":"","title":"On Network Structural and Temporal Encodings: A Space and Time Odyssey","uid":"v-tvcg-20233310019","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"We visualize the predictions of multiple machine learning models to help biologists as they interactively make decisions about cell lineage---the development of a (plant) embryo from a single ovum cell. Traditionally, biologists manually constructed the cell lineage based on a confocal microscopy dataset, starting from this observation and reasoning backward in time to establish cell inheritance. To speed up this tedious process, we make use of machine learning (ML) models trained on a database of manually established cell lineages to assist the biologist in cell assignment. Most biologists, however, are not familiar with ML, nor is it clear to them which model best predicts the embryo's development. We thus have developed a visualization system that is designed to support biologists in exploring and comparing ML models, checking the model predictions, detecting possible ML model mistakes, and deciding on the most likely embryo development. To evaluate our proposed system, we deployed our interface with six biologists in an observational study. Our results show that the visual representations of machine learning are easily understandable, and our tool, LineageD+, could potentially increase biologists' working efficiency and enhance the understanding of embryos.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Jiayi Hong"],"doi":"10.1109/TVCG.2023.3302308","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Visualization, visual analytics, machine learning, comparing ML predictions, human-AI teaming, plant biology, cell lineage"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233302308","time_end":"","time_stamp":"","time_start":"","title":"Visualizing and Comparing Machine Learning Predictions to Improve Human-AI Teaming on the Example of Cell Lineage","uid":"v-tvcg-20233302308","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"A contiguous area cartogram is a geographic map in which the area of each region is proportional to numerical data (e.g., population size) while keeping neighboring regions connected. In this study, we investigated whether value-to-area legends (square symbols next to the values represented by the squares' areas) and grid lines aid map readers in making better area judgments. We conducted an experiment to determine the accuracy, speed, and confidence with which readers infer numerical data values for the mapped regions. 
We found that, when only informed about the total numerical value represented by the whole cartogram without any legend, the distribution of estimates for individual regions was centered near the true value with substantial spread. Legends with grid lines significantly reduced the spread but led to a tendency to underestimate the values. Comparing differences between regions or between cartograms revealed that legends and grid lines slowed the estimation without improving accuracy. However, participants were more likely to complete the tasks when legends and grid lines were present, particularly when the area units represented by these features could be interactively selected. We recommend considering the cartogram's use case and purpose before deciding whether to include grid lines or an interactive legend.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Michael Gastner"],"doi":"10.1109/TVCG.2023.3275925","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Task Analysis, Symbols, Data Visualization, Sociology, Visualization, Switches, Mice, Cartogram, Geovisualization, Interactive Data Exploration, Quantitative Evaluation"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233275925","time_end":"","time_stamp":"","time_start":"","title":"Effectiveness of Area-to-Value Legends and Grid Lines in Contiguous Area Cartograms","uid":"v-tvcg-20233275925","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Reading a visualization is like reading a paragraph. Each sentence is a comparison: the mean of these is higher than those; this difference is smaller than that. What determines which comparisons are made first? The viewer's goals and expertise matter, but the way that values are visually grouped together within the chart also impacts those comparisons. Research from psychology suggests that comparisons involve multiple steps. First, the viewer divides the visualization into a set of units. This might include a single bar or a grouped set of bars. Then the viewer selects and compares two of these units, perhaps noting that one pair of bars is longer than another. Viewers might take an additional third step and perform a second-order comparison, perhaps determining that the difference between one pair of bars is greater than the difference between another pair. We create a visual comparison taxonomy that allows us to develop and test a sequence of hypotheses about which comparisons people are more likely to make when reading a visualization. We find that people tend to compare two groups before comparing two individual bars and that second-order comparisons are rare. Visual cues like spatial proximity and color can influence which elements are grouped together and selected for comparison, with spatial proximity being a stronger grouping cue. 
Interestingly, once viewers have grouped together and compared a set of bars, regardless of whether the group is formed by spatial proximity or color similarity, they no longer consider other possible groupings in their comparisons.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Cindy Xiong Bearfield"],"doi":"10.1109/TVCG.2023.3289292","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["comparison, perception, visual grouping, bar charts, verbal conclusions."],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233289292","time_end":"","time_stamp":"","time_start":"","title":"What Does the Chart Say? Grouping Cues Guide Viewer Comparisons and Conclusions in Bar Charts","uid":"v-tvcg-20233289292","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Automated visualization recommendation facilitates the rapid creation of effective visualizations, which is especially beneficial for users with limited time and limited knowledge of data visualization. There is an increasing trend in leveraging machine learning (ML) techniques to achieve an end-to-end visualization recommendation. However, existing ML-based approaches implicitly assume that there is only one appropriate visualization for a specific dataset, which is often not true for real applications. Also, they often work like a black box, making it difficult for users to understand why specific visualizations are recommended. To fill this research gap, we propose AdaVis, an adaptive and explainable approach to recommend one or multiple appropriate visualizations for a tabular dataset. It leverages a box embedding-based knowledge graph to effectively model the possible one-to-many mapping relations among different entities (i.e., data features, dataset columns, datasets, and visualization choices). The embeddings of the entities and relations can be learned from dataset-visualization pairs. Also, AdaVis incorporates the attention mechanism into the inference framework. Attention can indicate the relative importance of data features for a dataset and provide fine-grained explainability. Our extensive evaluations through quantitative metric evaluations, case studies, and user interviews demonstrate the effectiveness of AdaVis.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Songheng Zhang"],"doi":"10.1109/TVCG.2023.3316469","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Visualization Recommendation, Logical Reasoning, Data Visualization, Knowledge Graph"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233316469","time_end":"","time_stamp":"","time_start":"","title":"AdaVis: Adaptive and Explainable Visualization Recommendation for Tabular Data","uid":"v-tvcg-20233316469","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Visualization linting is a tool proven effective in assisting users to follow established visualization guidelines. 
Despite its success, visualization linting for choropleth maps, one of the most popular visualizations on the internet, has yet to be investigated. In this paper, we present GeoLinter, a linting framework for choropleth maps that assists in creating accurate and robust maps. Based on a set of design guidelines and metrics drawing upon a collection of best practices from the cartographic literature, GeoLinter detects potentially suboptimal design decisions and provides further recommendations on design improvement with explanations at each step of the design process. We perform a validation study to evaluate the proposed framework's functionality with respect to identifying and fixing errors and apply its results to improve the robustness of GeoLinter. Finally, we demonstrate the effectiveness of GeoLinter - validated through empirical studies - by applying it to a series of case studies using real-world datasets.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Arlen Fan"],"doi":"10.1109/TVCG.2023.3322372","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Data visualization, Image color analysis, Geology, Recommender systems, Guidelines, Bars, Visualization; Author Keywords: Automated visualization design, choropleth maps, visualization linting, visualization recommendation"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233322372","time_end":"","time_stamp":"","time_start":"","title":"GeoLinter: A Linting Framework for Choropleth Maps","uid":"v-tvcg-20233322372","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Researchers have derived many theoretical models for specifying users\u2019 insights as they interact with a visualization system. These representations are essential for understanding the insight discovery process, such as when inferring user interaction patterns that lead to insight or assessing the rigor of reported insights. However, theoretical models can be difficult to apply to existing tools and user studies, often due to discrepancies in how insight and its constituent parts are defined. This paper calls attention to the consistent structures that recur across the visualization literature and describes how they connect multiple theoretical representations of insight. We synthesize a unified formalism for insights using these structures, enabling a wider audience of researchers and developers to adopt the corresponding models. Through a series of theoretical case studies, we use our formalism to compare and contrast existing theories, revealing interesting research challenges in reasoning about a user's domain knowledge and leveraging synergistic approaches in data mining and data management research.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Leilani Battle"],"doi":"10.1109/TVCG.2023.3326698","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233326698","time_end":"","time_stamp":"","time_start":"","title":"What Do We Mean When We Say \u201cInsight\u201d? 
A Formal Synthesis of Existing Theory","uid":"v-tvcg-20233326698","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"This paper presents a computational framework for the concise encoding of an ensemble of persistence diagrams, in the form of weighted Wasserstein barycenters [100], [102] of a dictionary of atom diagrams. We introduce a multi-scale gradient descent approach for the efficient resolution of the corresponding minimization problem, which interleaves the optimization of the barycenter weights with the optimization of the atom diagrams. Our approach leverages the analytic expressions for the gradient of both sub-problems to ensure fast iterations, and it additionally exploits shared-memory parallelism. Extensive experiments on public ensembles demonstrate the efficiency of our approach, with Wasserstein dictionary computations on the order of minutes for the largest examples. We show the utility of our contributions in two applications. First, we apply Wasserstein dictionaries to data reduction and reliably compress persistence diagrams by concisely representing them with their weights in the dictionary. Second, we present a dimensionality reduction framework based on a Wasserstein dictionary defined with a small number of atoms (typically three) and encode the dictionary as a low-dimensional simplex embedded in a visual space (typically in 2D). In both applications, quantitative experiments assess the relevance of our framework. Finally, we provide a C++ implementation that can be used to reproduce our results.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Julien Tierny"],"doi":"10.1109/TVCG.2023.3330262","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Topological data analysis, ensemble data, persistence diagrams"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233330262","time_end":"","time_stamp":"","time_start":"","title":"Wasserstein Dictionaries of Persistence Diagrams","uid":"v-tvcg-20233330262","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"We present Submerse, an end-to-end framework for visualizing flooding scenarios on large and immersive display ecologies. Specifically, we reconstruct a surface mesh from input flood simulation data and generate a to-scale 3D virtual scene by incorporating geographical data such as terrain, textures, buildings, and additional scene objects. To optimize computation and memory performance for large simulation datasets, we discretize the data on an adaptive grid using dynamic quadtrees and support level-of-detail based rendering. Moreover, to provide a perception of flooding direction for a time instance, we animate the surface mesh by synthesizing water waves. As interaction is key for effective decision-making and analysis, we introduce two novel techniques for flood visualization in immersive systems: (1) an automatic scene-navigation method using optimal camera viewpoints generated for marked points-of-interest based on the display layout, and (2) an AR-based focus+context technique using an aux display system. Submerse is developed in collaboration between computer scientists and atmospheric scientists. 
We evaluate the effectiveness of our system and application by conducting workshops with emergency managers, domain experts, and concerned stakeholders in the Stony Brook Reality Deck, an immersive gigapixel facility, to visualize a superstorm flooding scenario in New York City.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Saeed Boorboor"],"doi":"10.1109/TVCG.2023.3332511","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Camera navigation, flooding simulation visualization, immersive visualization, mixed reality"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233332511","time_end":"","time_stamp":"","time_start":"","title":"Submerse: Visualizing Storm Surge Flooding Simulations in Immersive Display Ecologies","uid":"v-tvcg-20233332511","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Visualization design studies bring together visualization researchers and domain experts to address yet-unsolved data analysis challenges stemming from the needs of the domain experts. Typically, the visualization researchers lead the design study process and implementation of any visualization solutions. This setup leverages the visualization researchers' knowledge of methodology, design, and programming, but limited availability to synchronize with the domain experts can hamper the design process. We consider an alternative setup where the domain experts take the lead in the design study, supported by the visualization experts. In this study, the domain experts are computer architecture experts who simulate and analyze novel computer chip designs. These chips rely on a Network-on-Chip (NOC) to connect components. The experts want to understand how the chip designs perform and what aspects of the design led to their performance. To aid this analysis, we develop Vis4Mesh, a visualization system that provides spatial, temporal, and architectural context to simulated NOC behavior. Integration with an existing computer architecture visualization tool enables architects to perform deep dives into specific architecture component behavior. We validate Vis4Mesh through a case study and a user study with computer architecture researchers. 
We reflect on our design and process, discussing advantages, disadvantages, and guidance for engaging in domain expert-led design studies.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yifan Sun"],"doi":"10.1109/TVCG.2023.3337173","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Data Visualization, Design Study, Network-on-Chip, Performance Analysis"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233337173","time_end":"","time_stamp":"","time_start":"","title":"Visual Exploratory Analysis for Designing Large-Scale Network-on-Chip Architectures: A Domain Expert-Led Design Study","uid":"v-tvcg-20233337173","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Visual and interactive machine learning systems (IML) are becoming ubiquitous as they empower individuals with varied machine learning expertise to analyze data. However, it remains complex to align interactions with visual marks to a user\u2019s intent for steering machine learning models. We explore using data and visual design probes to elicit users\u2019 desired interactions to steer ML models via visual encodings within IML interfaces. We conducted an elicitation study with 20 data analysts with varying expertise in ML. We summarize our findings as target-interaction pairs, which we compare to prior systems to assess the utility of the probes. We additionally surfaced insights about factors influencing how and why participants chose to interact with visual encodings, including refraining from interacting. Finally, we reflect on the value of gathering such formative empirical evidence via data and visual design probes ahead of developing IML prototypes. ","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Anamaria Crisan"],"doi":"10.1109/TVCG.2023.3322898","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Design Probes, Interactive Machine Learning, Model Steering, Semantic Interaction"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233322898","time_end":"","time_stamp":"","time_start":"","title":"Eliciting Model Steering Interactions from Users via Data and Visual Design Probes","uid":"v-tvcg-20233322898","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"This paper investigates the role of text in visualizations, specifically the impact of text position, semantic content, and biased wording. Two empirical studies were conducted based on two tasks (predicting data trends and appraising bias) using two visualization types (bar and line charts). While the addition of text had a minimal effect on how people perceive data trends, there was a significant impact on how biased they perceive the authors to be. This finding revealed a relationship between the degree of bias in textual information and the perception of the authors' bias. Exploratory analyses support an interaction between a person's prediction and the degree of bias they perceived. 
This paper also develops a crowdsourced method for creating chart annotations that range from neutral to highly biased. This research highlights the need for designers to mitigate potential polarization of readers' opinions based on how authors' ideas are expressed.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Chase Stokes"],"doi":"10.1109/TVCG.2023.3338451","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Visualization, text, annotation, perceived bias, judgment, prediction"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233338451","time_end":"","time_stamp":"","time_start":"","title":"The Role of Text in Visualizations: How Annotations Shape Perceptions of Bias and Influence Predictions","uid":"v-tvcg-20233338451","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Molecular docking is a key technique in various fields like structural biology, medicinal chemistry, and biotechnology. It is widely used for virtual screening during drug discovery, computer-assisted drug design, and protein engineering. A general molecular docking process consists of the target and ligand selection, their preparation, and the docking process itself, followed by the evaluation of the results. However, the most commonly used docking software provides no or very basic evaluation possibilities. Scripting and external molecular viewers are often used, which are not designed for an efficient analysis of docking results. Therefore, we developed InVADo, a comprehensive interactive visual analysis tool for large docking data. It consists of multiple linked 2D and 3D views. It filters and spatially clusters the data, and enriches it with post-docking analysis results of protein-ligand interactions and functional groups, to enable well-founded decision-making. In an exemplary case study, domain experts confirmed that InVADo facilitates and accelerates the analysis workflow. They rated it as a convenient, comprehensive, and feature-rich tool, especially useful for virtual screening.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Michael Krone"],"doi":"10.1109/TVCG.2023.3337642","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Molecular Docking, AutoDock, Virtual Screening, Visual Analysis, Visualization, Clustering, Protein-Ligand Interaction."],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233337642","time_end":"","time_stamp":"","time_start":"","title":"InVADo: Interactive Visual Analysis of Molecular Docking Data","uid":"v-tvcg-20233337642","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Traditional deep learning algorithms assume that all data is available during training, which presents challenges when handling large-scale time-varying data. To address this issue, we propose a data reduction pipeline called knowledge distillation-based implicit neural representation (KD-INR) for compressing large-scale time-varying data. The approach consists of two stages: spatial compression and model aggregation. 
In the first stage, each time step is compressed using an implicit neural representation with bottleneck layers and features of interest preservation-based sampling. In the second stage, we utilize an offline knowledge distillation algorithm to extract knowledge from the trained models and aggregate it into a single model. We evaluated our approach on a variety of time-varying volumetric data sets. Both quantitative and qualitative results, such as PSNR, LPIPS, and rendered images, demonstrate that KD-INR surpasses the state-of-the-art approaches, including learning-based (i.e., CoordNet, NeurComp, and SIREN) and lossy compression (i.e., SZ3, ZFP, and TTHRESH) methods, at various compression ratios ranging from hundreds to ten thousand.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Han Jun"],"doi":"10.1109/TVCG.2023.3345373","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Time-varying data compression, implicit neural representation, knowledge distillation, volume visualization."],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233345373","time_end":"","time_stamp":"","time_start":"","title":"KD-INR: Time-Varying Volumetric Data Compression via Knowledge Distillation-based Implicit Neural Representation","uid":"v-tvcg-20233345373","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Quantum computing offers significant speedup compared to classical computing, which has led to a growing interest among users in learning and applying quantum computing across various applications. However, quantum circuits, which are fundamental for implementing quantum algorithms, can be challenging for users to understand due to their underlying logic, such as the temporal evolution of quantum states and the effect of quantum amplitudes on the probability of basis quantum states. To fill this research gap, we propose QuantumEyes, an interactive visual analytics system to enhance the interpretability of quantum circuits through both global and local levels. For the global-level analysis, we present three coupled visualizations to delineate the changes of quantum states and the underlying reasons: a Probability Summary View to overview the probability evolution of quantum states; a State Evolution View to enable an in-depth analysis of the influence of quantum gates on the quantum states; and a Gate Explanation View to show the individual qubit states and facilitate a better understanding of the effect of quantum gates. For the local-level analysis, we design a novel geometrical visualization, the dandelion chart, to explicitly reveal how the quantum amplitudes affect the probability of the quantum state. We thoroughly evaluated QuantumEyes as well as the novel dandelion chart integrated into it through two case studies on different types of quantum algorithms and in-depth expert interviews with 12 domain experts. 
The results demonstrate the effectiveness and usability of our approach in enhancing the interpretability of quantum circuits.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Shaolun Ruan"],"doi":"10.1109/TVCG.2023.3332999","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Data visualization, design study, interpretability, quantum computing."],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233332999","time_end":"","time_stamp":"","time_start":"","title":"QuantumEyes: Towards Better Interpretability of Quantum Circuits","uid":"v-tvcg-20233332999","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"This paper presents a computational framework for the Wasserstein auto-encoding of merge trees (MT-WAE), a novel extension of the classical auto-encoder neural network architecture to the Wasserstein metric space of merge trees. In contrast to traditional auto-encoders, which operate on vectorized data, our formulation explicitly manipulates merge trees on their associated metric space at each layer of the network, resulting in superior accuracy and interpretability. Our novel neural network approach can be interpreted as a non-linear generalization of previous linear attempts [79] at merge tree encoding. It also trivially extends to persistence diagrams. Extensive experiments on public ensembles demonstrate the efficiency of our algorithms, with MT-WAE computations taking on the order of minutes on average. We show the utility of our contributions in two applications adapted from previous work on merge tree encoding [79]. First, we apply MT-WAE to merge tree compression, concisely representing merge trees with their coordinates in the final layer of our auto-encoder. Second, we document an application to dimensionality reduction, by exploiting the latent space of our auto-encoder, for the visual analysis of ensemble data. We illustrate the versatility of our framework by introducing two penalty terms, to help preserve in the latent space both the Wasserstein distances between merge trees and their clusters. In both applications, quantitative experiments assess the relevance of our framework. Finally, we provide a C++ implementation that can be used for reproducibility.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Julien Tierny"],"doi":"10.1109/TVCG.2023.3334755","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Topological data analysis, ensemble data, persistence diagrams, merge trees, auto-encoders, neural networks"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233334755","time_end":"","time_stamp":"","time_start":"","title":"Wasserstein Auto-Encoders of Merge Trees (and Persistence Diagrams)","uid":"v-tvcg-20233334755","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"We present VoxAR, a method to facilitate effective visualization of volume-rendered objects in optical see-through head-mounted displays (OST-HMDs). 
The potential of augmented reality (AR) to integrate digital information into the physical world provides new opportunities for visualizing and interpreting scientific data. However, a limitation of OST-HMD technology is that rendered pixels of a virtual object can interfere with the colors of the real world, making it challenging to perceive the augmented virtual information accurately. We address this challenge in a two-step approach. First, VoxAR determines an appropriate placement of the volume-rendered object in the real-world scene by evaluating a set of spatial and environmental objectives, managed as user-selected preferences and pre-defined constraints. We achieve a real-time solution by implementing the objectives using a GPU shader language. Next, VoxAR adjusts the colors of the input transfer function (TF) based on the real-world placement region. Specifically, we introduce a novel optimization method that adjusts the TF colors such that the resulting volume-rendered pixels are discernible against the background and the TF maintains the perceptual mapping between the colors and data intensity values. Finally, we present an assessment of our approach through objective evaluations and subjective user studies.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Saeed Boorboor"],"doi":"10.1109/TVCG.2023.3340770","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Adaptive Visualization, Situated Visualization, Augmented Reality, Volume Rendering"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233340770","time_end":"","time_stamp":"","time_start":"","title":"VoxAR: Adaptive Visualization of Volume Rendered Objects in Optical See-Through Augmented Reality","uid":"v-tvcg-20233340770","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Label quality issues, such as noisy labels and imbalanced class distributions, have negative effects on model performance. Automatic reweighting methods identify problematic samples with label quality issues by recognizing their negative effects on validation samples and assigning lower weights to them. However, these methods fail to achieve satisfactory performance when the validation samples are of low quality. To tackle this, we develop Reweighter, a visual analysis tool for sample reweighting. The reweighting relationships between validation samples and training samples are modeled as a bipartite graph. Based on this graph, a validation sample improvement method is developed to improve the quality of validation samples. Since the automatic improvement may not always be perfect, a co-cluster-based bipartite graph visualization is developed to illustrate the reweighting relationships and support the interactive adjustments to validation samples and reweighting results. The adjustments are converted into the constraints of the validation sample improvement method to further improve validation samples. 
We demonstrate the effectiveness of Reweighter in improving reweighting results through quantitative evaluation and two case studies.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Weikai Yang"],"doi":"10.1109/TVCG.2023.3345340","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233345340","time_end":"","time_stamp":"","time_start":"","title":"Interactive Reweighting for Mitigating Label Quality Issues","uid":"v-tvcg-20233345340","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"We examined user preferences to combine multiple interaction modalities for collaborative interaction with data shown on large vertical displays. Large vertical displays facilitate visual data exploration and allow the use of diverse interaction modalities by multiple users at different distances from the screen. Yet, how to offer multiple interaction modalities is a non-trivial problem. We conducted an elicitation study with 20 participants that generated 1015 interaction proposals combining touch, speech, pen, and mid-air gestures. Given the opportunity to interact using these four modalities, participants preferred speech interaction in 10 of 15 low-level tasks and direct manipulation for straightforward tasks such as showing a tooltip or selecting. In contrast to previous work, participants most favored unimodal and personal interactions. We identified what we call collaborative synonyms among their interaction proposals and found that pairs of users collaborated either unimodally and simultaneously or multimodally and sequentially. We provide insights into how end-users associate visual exploration tasks with certain modalities and how they collaborate at different interaction distances using specific interaction modalities. The supplemental material is available at https://osf.io/m8zuh.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Gabriela Molina Le\u00f3n"],"doi":"10.1109/TVCG.2023.3323150","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Multimodal interaction, collaborative work, large vertical displays, elicitation study, spatio-temporal data"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233323150","time_end":"","time_stamp":"","time_start":"","title":"Eliciting Multimodal and Collaborative Interactions for Data Exploration on Large Vertical Displays","uid":"v-tvcg-20233323150","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Data integration is often performed to consolidate information from multiple disparate data sources during visual data analysis. However, integration operations are usually separate from visual analytics operations such as encode and filter in both interface design and empirical research. We conducted a preliminary user study to investigate whether and how data integration should be incorporated directly into the visual analytics process. 
We used two interface alternatives featuring contrasting approaches to the data preparation and analysis workflow: manual file-based ex-situ integration as a separate step from visual analytics operations; and automatic UI-based in-situ integration merged with visual analytics operations. Participants were asked to complete specific and free-form tasks with each interface, browsing for patterns, generating insights, and summarizing relationships between attributes distributed across multiple files. Analyzing participants' interactions and feedback, we found task completion time and total interactions to be similar across interfaces and tasks, and we also observed unique integration strategies for each interface and emergent behaviors related to satisficing and cognitive bias. Participants' time spent and interactions revealed that in-situ integration enabled users to spend more time on analysis tasks compared with ex-situ integration. Participants' integration strategies and analytical behaviors revealed differences in interface usage for generating and tracking hypotheses and insights. With these results, we synthesized preliminary guidelines for designing future visual analytics interfaces that can support integrating attributes throughout an active analysis process.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Adam Coscia"],"doi":"10.1109/TVCG.2023.3334513","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Visual analytics, Data integration, User interface design, Integration strategies, Analytical behaviors."],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233334513","time_end":"","time_stamp":"","time_start":"","title":"Preliminary Guidelines For Combining Data Integration and Visual Data Analysis","uid":"v-tvcg-20233334513","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"We report on challenges and considerations for supporting design processes for visualizations in motion embedded in sports videos. We derive our insights from analyzing swimming race visualizations and motion-related data, building a technology probe, as well as a study with designers. Understanding how to design situated visualizations in motion is important for a variety of contexts. Competitive sports coverage, in particular, increasingly includes information on athlete or team statistics and records. Although moving visual representations attached to athletes or other targets are starting to appear, systematic investigations on how to best support their design process in the context of sports videos are still missing. Our work makes several contributions in identifying opportunities for visualizations to be added to swimming competition coverage but, most importantly, in identifying requirements and challenges for designing situated visualizations in motion. 
Our investigations include the analysis of a survey with swimming enthusiasts on their motion-related information needs, an ideation workshop to collect designs and elicit design challenges, the design of a technology probe that allows creating embedded visualizations in motion based on real data, and an evaluation with visualization designers that aimed to understand the benefits of designing directly on videos.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Lijie Yao"],"doi":"10.1109/TVCG.2023.3341990","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Data visualization, Sports, Videos, Probes, Surveys, Authoring systems, Games, Design framework, Embedded visualization, Sports analytics, Visualization in motion"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233341990","time_end":"","time_stamp":"","time_start":"","title":"Designing for Visualization in Motion: Embedding Visualizations in Swimming Videos","uid":"v-tvcg-20233341990","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Recent growth in the popularity of large language models has led to their increased usage for summarizing, predicting, and generating text, making it vital to help researchers and engineers understand how and why they work. We present KnowledgeVIS, a human-in-the-loop visual analytics system for interpreting language models using fill-in-the-blank sentences as prompts. By comparing predictions between sentences, KnowledgeVIS reveals learned associations that intuitively connect what language models learn during training to natural language tasks downstream, helping users create and test multiple prompt variations, analyze predicted words using a novel semantic clustering technique, and discover insights using interactive visualizations. Collectively, these visualizations help users identify the likelihood and uniqueness of individual predictions, compare sets of predictions between prompts, and summarize patterns and relationships between predictions across all prompts. We demonstrate the capabilities of KnowledgeVIS with feedback from six NLP experts as well as three different use cases: (1) probing biomedical knowledge in two domain-adapted models; (2) evaluating harmful identity stereotypes; and (3) discovering facts and relationships between three general-purpose models.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Adam Coscia"],"doi":"10.1109/TVCG.2023.3346713","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Visual analytics, language models, prompting, interpretability, machine learning."],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233346713","time_end":"","time_stamp":"","time_start":"","title":"KnowledgeVIS: Interpreting Language Models by Comparing Fill-in-the-Blank Prompts","uid":"v-tvcg-20233346713","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Ensembles of contours arise in various applications like simulation, computer-aided design, and semantic segmentation. 
Uncovering ensemble patterns and analyzing individual members is a challenging task that suffers from clutter. Ensemble statistical summarization can alleviate this issue by permitting the analysis of ensembles' distributional components, such as the mean and median, confidence intervals, and outliers. Contour boxplots, powered by Contour Band Depth (CBD), are a popular non-parametric ensemble summarization method that benefits from CBD's generality, robustness, and theoretical properties. In this work, we introduce Inclusion Depth (ID), a new notion of contour depth with three defining characteristics. First, ID is a generalization of functional Half-Region Depth, which offers several theoretical guarantees. Second, ID relies on a simple principle: the inside/outside relationships between contours. This facilitates implementing ID and understanding its results. Third, the computational complexity of ID scales quadratically in the number of members of the ensemble, improving on CBD's cubic complexity. In practice, this also speeds up the computation, enabling the use of ID for exploring large contour ensembles or in contexts requiring multiple depth evaluations, such as clustering. In a series of experiments on synthetic data and case studies with meteorological and segmentation data, we evaluate ID's performance and demonstrate its capabilities for the visual analysis of contour ensembles.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Nicol\u00e1s Ch\u00e1ves"],"doi":"10.1109/TVCG.2024.3350076","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Uncertainty visualization, contours, ensemble summarization, depth statistics."],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243350076","time_end":"","time_stamp":"","time_start":"","title":"Inclusion Depth for Contour Ensembles","uid":"v-tvcg-20243350076","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Interactive visualization can support fluid exploration but is often limited to predetermined tasks. Scripting can support a vast range of queries but may be more cumbersome for free-form exploration. Embedding interactive visualization in scripting environments, such as computational notebooks, provides an opportunity to leverage the strengths of both direct manipulation and scripting. We investigate interactive visualization design methodology, choices, and strategies under this paradigm through a design study of calling context trees used in performance analysis, a field which exemplifies typical exploratory data analysis workflows with big data and hard-to-define problems. We first produce a formal task analysis assigning tasks to graphical or scripting contexts based on their specificity, frequency, and suitability. We then design a notebook-embedded interactive visualization and validate it with intended users. In a follow-up study, we present participants with multiple graphical and scripting interaction modes to elicit feedback about notebook-embedded visualization design, finding consensus in support of the interaction model. 
We report and reflect on observations regarding the process and design implications for combining visualization and scripting in notebooks.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Connor Scully-Allison"],"doi":"10.1109/TVCG.2024.3354561","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Exploratory Data Analysis, Interactive Data Analysis, Computational Notebooks, Hybrid Visualization-Scripting, Visualization Design"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243354561","time_end":"","time_stamp":"","time_start":"","title":"Design Concerns for Integrated Scripting and Interactive Visualization in Notebook Environments","uid":"v-tvcg-20243354561","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"News articles containing data visualizations play an important role in informing the public on issues ranging from public health to politics. Recent research on the persuasive appeal of data visualizations suggests that prior attitudes can be notoriously difficult to change. Inspired by an NYT article, we designed two experiments to evaluate the impact of elicitation and contrasting narratives on attitude change, recall, and engagement. We hypothesized that eliciting prior beliefs leads to more elaborative thinking that ultimately results in higher attitude change, better recall, and engagement. Our findings revealed that visual elicitation leads to higher engagement in terms of feelings of surprise. While there is an overall attitude change across all experiment conditions, we did not observe a significant effect of belief elicitation on attitude change. With regard to recall error, while participants in the draw trend elicitation exhibited significantly lower recall error than participants in the categorize trend condition, we found no significant difference in recall error when comparing elicitation conditions to no elicitation. In a follow-up study, we added contrasting narratives with the purpose of making the main visualization (communicating data on the focal issue) appear strikingly different. Compared to the results of Study 1, we found that contrasting narratives improved engagement in terms of surprise and interest but interestingly resulted in higher recall error and no significant change in attitude. 
We discuss the effects of elicitation and contrasting narratives in the context of topic involvement and the strengths of temporal trends encoded in the data visualization.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Milad Rogha"],"doi":"10.1109/TVCG.2024.3355884","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Data Visualization, Market Research, Visualization, Uncertainty, Data Models, Correlation, Attitude Control, Belief Elicitation, Visual Elicitation, Data Visualization, Contrasting Narratives"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243355884","time_end":"","time_stamp":"","time_start":"","title":"The Impact of Elicitation and Contrasting Narratives on Engagement, Recall and Attitude Change with News Articles Containing Data Visualization","uid":"v-tvcg-20243355884","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Partitioning a dynamic network into subsets (i.e., snapshots) based on disjoint time intervals is a widely used technique for understanding how structural patterns of the network evolve. However, selecting an appropriate time window (i.e., slicing a dynamic network into snapshots) is challenging and time-consuming, often involving a trial-and-error approach to investigating underlying structural patterns. To address this challenge, we present MoNetExplorer, a novel interactive visual analytics system that leverages temporal network motifs to provide recommendations for window sizes and support users in visually comparing different slicing results. MoNetExplorer provides a comprehensive analysis based on window size, including (1) a temporal overview to identify the structural information, (2) temporal network motif composition, and (3) node-link-diagram-based details to enable users to identify and understand structural patterns at various temporal resolutions. To demonstrate the effectiveness of our system, we conducted a case study with network researchers using two real-world dynamic network datasets. Our case studies show that the system effectively supports users to gain valuable insights into the temporal and structural aspects of dynamic networks.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Seokweon Jung"],"doi":"10.1109/TVCG.2023.3337396","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Visual analytics, Measurement, Size measurement, Windows, Time measurement, Data visualization, Task analysis, Visual analytics, Dynamic networks, Temporal network motifs, Interactive network slicing"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233337396","time_end":"","time_stamp":"","time_start":"","title":"A Visual Analytics System for Analyzing Dynamic Networks with Temporal Network Motifs","uid":"v-tvcg-20233337396","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"We conduct two in-lab experiments (N=93) to evaluate the effectiveness of Gantt charts, extended Gantt charts, and stringline charts for visualizing fixed-order event sequence data. 
We first formulate five types of event sequences and define three types of sequence elements: point events, interval events, and the temporal gaps between them. Our two experiments focus on event sequences with a pre-defined, fixed order, and measure task error rates and completion time. The first experiment shows single sequences and assesses the three charts' performance in comparing event duration or gap. The second experiment shows multiple sequences and evaluates how well the charts reveal temporal patterns. The results suggest that when visualizing single fixed-order event sequences, 1) Gantt and extended Gantt charts lead to comparable error rates in the duration-comparing task; 2) Gantt charts exhibit completion times shorter than or equal to those of extended Gantt charts; 3) both Gantt and extended Gantt charts demonstrate shorter completion times than stringline charts; 4) however, stringline charts outperform the other two charts with fewer errors in the comparison task when event type counts are high. Additionally, when visualizing multiple point-based fixed-order event sequences, stringline charts require less time than Gantt charts for people to find temporal patterns. Based on these findings, we discuss design opportunities for visualizing fixed-order event sequences and future avenues for optimizing these charts.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Junxiu Tang"],"doi":"10.1109/TVCG.2024.3358919","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Gantt chart, stringline chart, Marey's graph, event sequence, empirical study"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243358919","time_end":"","time_stamp":"","time_start":"","title":"A Comparative Study on Fixed-order Event Sequence Visualizations: Gantt, Extended Gantt, and Stringline Charts","uid":"v-tvcg-20243358919","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Seasonal-trend decomposition based on loess (STL) is a powerful tool to explore time series data visually. In this paper, we present an extension of STL to uncertain data, named uncertainty-aware STL (UASTL). Our method propagates multivariate Gaussian distributions mathematically exactly through the entire analysis and visualization pipeline. Thereby, stochastic quantities shared between the components of the decomposition are preserved. Moreover, we present application scenarios with uncertainty modeling based on Gaussian processes, e.g., data with uncertain areas or missing values. Besides these mathematical results and modeling aspects, we introduce visualization techniques that address the challenges of uncertainty visualization and the problem of visualizing highly correlated components of a decomposition. The global uncertainty propagation enables the time series visualization with STL-consistent samples, the exploration of correlation between and within the decomposition's components, and the analysis of the impact of varying uncertainty. Finally, we show the usefulness of UASTL and the importance of uncertainty visualization with several examples. 
Thereby, a comparison with conventional STL is performed.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Tim Krake"],"doi":"10.1109/TVCG.2024.3364388","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["- I.6.9.g Visualization techniques and methodologies < I.6.9 Visualization < I.6 Simulation, Modeling, and Visualization < I Compu - G.3 Probability and Statistics < G Mathematics of Computing - G.3.n Statistical computing < G.3 Probability and Statistics < G Mathematics of Computing - G.3.p Stochastic processes < G.3 Probability and Statistics < G Mathematics of Computing"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243364388","time_end":"","time_stamp":"","time_start":"","title":"Uncertainty-Aware Seasonal-Trend Decomposition Based on Loess","uid":"v-tvcg-20243364388","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"The need to understand the structure of hierarchical or high-dimensional data is present in a variety of fields. Hyperbolic spaces have proven to be an important tool for embedding computations and analysis tasks as their non-linear nature lends itself well to tree or graph data. Subsequently, they have also been used in the visualization of high-dimensional data, where they exhibit increased embedding performance. However, none of the existing dimensionality reduction methods for embedding into hyperbolic spaces scale well with the size of the input data. That is because the embeddings are computed via iterative optimization schemes and the computation cost of every iteration is quadratic in the size of the input. Furthermore, due to the non-linear nature of hyperbolic spaces, Euclidean acceleration structures cannot directly be translated to the hyperbolic setting. This paper introduces the first acceleration structure for hyperbolic embeddings, building upon a polar quadtree. We compare our approach with existing methods and demonstrate that it computes embeddings of similar quality in significantly less time. Implementation and scripts for the experiments can be found at this https URL.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Martin Skrodzki"],"doi":"10.1109/TVCG.2024.3364841","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Human-Computer Interaction (cs.HC); Artificial Intelligence (cs.AI); Machine Learning (cs.LG); Quantitative Methods (q-bio.QM); Machine Learning (stat.ML) Dimensionality reduction, t-SNE, hyperbolic embedding, acceleration structure"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243364841","time_end":"","time_stamp":"","time_start":"","title":"Accelerating hyperbolic t-SNE","uid":"v-tvcg-20243364841","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Implicit Neural representations (INRs) are widely used for scientific data reduction and visualization by modeling the function that maps a spatial location to a data value. 
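To make the INR setting concrete: the representation is just a trained network f(x) ≈ v, and visualization queries reduce to forward passes, which is why dense sampling is costly. A toy numpy version with random, untrained weights (all sizes invented):

    import numpy as np

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(64, 3)), rng.normal(size=64)   # untrained toy weights
    W2, b2 = rng.normal(size=(1, 64)), rng.normal(size=1)

    def inr(x):              # x: (n, 3) spatial locations -> (n,) data values
        return (np.tanh(x @ W1.T + b1) @ W2.T + b2).ravel()

    r = 64                   # dense grid sampling: cost grows as r**3
    g = np.linspace(0.0, 1.0, r)
    pts = np.stack(np.meshgrid(g, g, g, indexing="ij"), axis=-1).reshape(-1, 3)
    values = inr(pts)        # 262,144 network evaluations for a single query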
Without any prior knowledge about the spatial distribution of values, we are forced to sample densely from INRs to perform visualization tasks like iso-surface extraction which can be very computationally expensive. Recently, range analysis has shown promising results in improving the efficiency of geometric queries, such as ray casting and hierarchical mesh extraction, on INRs for 3D geometries by using arithmetic rules to bound the output range of the network within a spatial region. However, the analysis bounds are often too conservative for complex scientific data. In this paper, we present an improved technique for range analysis by revisiting the arithmetic rules and analyzing the probability distribution of the network output within a spatial region. We model this distribution efficiently as a Gaussian distribution by applying the central limit theorem. Excluding low probability values, we are able to tighten the output bounds, resulting in a more accurate estimation of the value range, and hence more accurate identification of iso-surface cells and more efficient iso-surface extraction on INRs. Our approach demonstrates superior performance in terms of the iso-surface extraction time on four datasets compared to the original range analysis method and can also be generalized to other geometric query tasks.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Haoyu Li"],"doi":"10.1109/TVCG.2024.3365089","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Iso-surface extraction, implicit neural representation, uncertainty propagation, affine arithmetic."],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243365089","time_end":"","time_stamp":"","time_start":"","title":"Improving Efficiency of Iso-Surface Extraction on Implicit Neural Representations Using Uncertainty Propagation","uid":"v-tvcg-20243365089","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Currently, growing data sources and long-running algorithms impede user attention and interaction with visual analytics applications. Progressive visualization (PV) and visual analytics (PVA) alleviate this problem by allowing immediate feedback and interaction with large datasets and complex computations, avoiding waiting for complete results by using partial results improving with time. Yet, creating a progressive visualization requires more effort than a regular visualization but also opens up new possibilities, such as steering the computations towards more relevant parts of the data, thus saving computational resources. However, there is currently no comprehensive overview of the design space for progressive visualization systems. We surveyed the related work of PV and derived a new taxonomy for progressive visualizations by systematically categorizing all PV publications that included visualizations with progressive features. Progressive visualizations can be categorized by well-known visualization taxonomies, but we also found that progressive visualizations can be distinguished by the way they manage their data processing, data domain, and visual update. Furthermore, we identified key properties such as uncertainty, steering, visual stability, and real-time processing that are significantly different with progressive applications. 
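The probabilistic range analysis in the iso-surface abstract above boils down to a per-cell predicate: model the network output over the cell as Gaussian, keep mean ± z standard deviations as the value range, and skip the cell if the iso-value falls outside it. A hedged sketch (the paper obtains the Gaussian via the central limit theorem rather than by sampling; the sampling shortcut and the threshold z here are purely illustrative):

    import numpy as np

    def cell_may_contain_iso(outputs, iso, z=3.0):
        # Gaussian range test: keep the cell only if the iso-value lies
        # within mean +/- z * std of the (approximately normal) outputs.
        m, s = outputs.mean(), outputs.std()
        return (m - z * s) <= iso <= (m + z * s)

    rng = np.random.default_rng(1)
    outputs = rng.normal(0.2, 0.05, size=256)      # stand-in for INR outputs in one cell
    print(cell_may_contain_iso(outputs, iso=0.5))  # False: the cell can be skipped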
We also collected evaluation methodologies reported by the publications and conclude with statistical findings, research gaps, and open challenges. A continuously updated visual browser of the survey data is available at visualsurvey.net/pva.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Alex Ulmer"],"doi":"10.1109/TVCG.2023.3346641","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Data visualization, Convergence, Visual analytics, Taxonomy Surveys, Rendering (computer graphics), Task analysis, Progressive Visual Analytics, Progressive Visualization, Taxonomy, State-of-the-Art Report, Survey"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233346641","time_end":"","time_stamp":"","time_start":"","title":"A Survey on Progressive Visualization","uid":"v-tvcg-20233346641","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Data visualization and journalism are deeply connected. From early infographics to recent data-driven storytelling, visualization has become an integrated part of contemporary journalism, primarily as a communication artifact to inform the general public. Data journalism, harnessing the power of data visualization, has emerged as a bridge between the growing volume of data and our society. Visualization research that centers around data storytelling has sought to understand and facilitate such journalistic endeavors. However, a recent metamorphosis in journalism has brought broader challenges and opportunities that extend beyond mere communication of data. We present this article to enhance our understanding of such transformations and thus broaden visualization research's scope and practical contribution to this evolving field. We first survey recent significant shifts, emerging challenges, and computational practices in journalism. We then summarize six roles of computing in journalism and their implications. Based on these implications, we provide propositions for visualization research concerning each role. Ultimately, by mapping the roles and propositions onto a proposed ecological model and contextualizing existing visualization research, we surface seven general topics and a series of research agendas that can guide future visualization research at this intersection.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yu Fu"],"doi":"10.1109/TVCG.2023.3287585","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Computational journalism, data visualization, data-driven storytelling, journalism"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233287585","time_end":"","time_stamp":"","time_start":"","title":"More Than Data Stories: Broadening the Role of Visualization in Contemporary Journalism","uid":"v-tvcg-20233287585","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"The increasing ubiquity of data in everyday life has elevated the importance of data literacy and accessible data representations, particularly for individuals with disabilities. 
While prior research predominantly focuses on the needs of the visually impaired, our survey aims to broaden this scope by investigating accessible data representations across a more inclusive spectrum of disabilities. After conducting a systematic review of 152 accessible data representation papers from ACM and IEEE databases, we found that roughly 78% of existing articles center on vision impairments. In this paper, we conduct a comprehensive review of the remaining 22% of papers focused on underrepresented disability communities. We developed categorical dimensions based on accessibility, visualization, and human-computer interaction to classify the papers. These dimensions include the community of focus, issues addressed, contribution type, study methods, participants, data type, visualization type, and data domain. Our work redefines accessible data representations by illustrating their application for disabilities beyond those related to vision. Building on our literature review, we identify and discuss opportunities for future research in accessible data representations. All supplemental materials are available at https://osf.io/yv4xm/?view_only=7b36a3fbf7a14b3888029966faa3def9.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Brianna Wimer"],"doi":"10.1109/TVCG.2024.3356566","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Accessibility, Data Representations."],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243356566","time_end":"","time_stamp":"","time_start":"","title":"Beyond Vision Impairments: Redefining the Scope of Accessible Data Representations","uid":"v-tvcg-20243356566","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"A multitude of studies have been conducted on graph drawing, but many existing methods only focus on optimizing a single aesthetic aspect of graph layouts. There are a few existing methods that attempt to develop a flexible solution for optimizing different aesthetic aspects measured by different aesthetic criteria. Furthermore, thanks to the significant advance in deep learning techniques, several deep learning-based layout methods were proposed recently, which have demonstrated the advantages of the deep learning approaches for graph drawing. However, none of these existing methods can be directly applied to optimizing non-differentiable criteria without special accommodation. In this work, we propose a novel Generative Adversarial Network (GAN) based deep learning framework for graph drawing, called SmartGD, which can optimize any quantitative aesthetic goals even though they are non-differentiable. In the cases where the aesthetic goal is too abstract to be described mathematically, SmartGD can draw graphs in a similar style as a collection of good layout examples, which might be selected by humans based on the abstract aesthetic goal. To demonstrate the effectiveness and efficiency of SmartGD, we conduct experiments on minimizing stress, minimizing edge crossing, maximizing crossing angle, and a combination of multiple aesthetics. 
The experimental results show that SmartGD achieves good performance both quantitatively and qualitatively compared with several popular graph drawing algorithms.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Xiaoqi Wang"],"doi":"10.1109/TVCG.2023.3306356","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233306356","time_end":"","time_stamp":"","time_start":"","title":"SmartGD: A GAN-Based Graph Drawing Framework for Diverse Aesthetic Goals","uid":"v-tvcg-20233306356","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Is it true that if citizens understand hurricane probabilities, they will make more rational decisions for evacuation? Finding answers to such questions is not straightforward in the literature because the terms \u201cjudgment\u201d and \u201cdecision making\u201d are often used interchangeably. This terminology conflation leads to a lack of clarity on whether people make suboptimal decisions because of inaccurate judgments of information conveyed in visualizations or because they use alternative yet currently unknown heuristics. To decouple judgment from decision making, we review relevant concepts from the literature and present two preregistered experiments (N=601) to investigate whether the task (judgment vs. decision making), the scenario (sports vs. humanitarian), and the visualization (quantile dotplots, density plots, probability bars) affect accuracy. While experiment 1 was inconclusive, we found evidence for a difference in experiment 2. Contrary to our expectations and previous research, which found decisions less accurate than their direct-equivalent judgments, our results pointed in the opposite direction. Our findings further revealed that decisions were less vulnerable to status-quo bias, suggesting decision makers may disfavor responses associated with inaction. We also found that both scenario and visualization types can influence people's judgments and decisions. Although effect sizes are not large and results should be interpreted carefully, we conclude that judgments cannot be safely used as proxy tasks for decision making, and discuss implications for visualization research and beyond. 
Materials and preregistrations are available at https://osf.io/ufzp5/?view_only=adc0f78a23804c31bf7fdd9385cb264f.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Ba\u015fak Oral"],"doi":"10.1109/TVCG.2023.3346640","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Data visualization, Task analysis, Decision making, Visualization, Bars, Sports, Terminology, Cognition, Decision Making, Judgment, Psychology, Visualization"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233346640","time_end":"","time_stamp":"","time_start":"","title":"Decoupling Judgment and Decision Making: A Tale of Two Tails","uid":"v-tvcg-20233346640","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Small multiples are a popular visualization method, displaying different views of a dataset using multiple frames, often with the same scale and axes. However, there is a need to address their potential constraints, especially in the context of human cognitive capacity limits. These limits dictate the maximum information our mind can process at once. We explore the issue of capacity limitation by testing competing theories that describe how the number of frames shown in a display, the scale of the frames, and time constraints impact user performance with small multiples of line charts in an energy grid scenario. In two online studies (Experiment 1: n = 141; Experiment 2: n = 360) and a follow-up eye-tracking analysis (n = 5), we found a linear decline in accuracy with increasing frames across seven tasks, which was not fully explained by differences in frame size, suggesting visual search challenges. Moreover, the studies demonstrate that highlighting specific frames can mitigate some visual search difficulties but, surprisingly, not eliminate them. This research offers insights into optimizing the utility of small multiples by aligning them with human limitations.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Helia Hosseinpour"],"doi":"10.1109/TVCG.2024.3372620","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Cognition, small multiples, time-series data"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243372620","time_end":"","time_stamp":"","time_start":"","title":"Examining Limits of Small Multiples: Frame Quantity Impacts Judgments with Line Graphs","uid":"v-tvcg-20243372620","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Scatterplots provide a visual representation of bivariate data (or 2D embeddings of multivariate data) that allows for effective analyses of data dependencies, clusters, trends, and outliers. Unfortunately, classical scatterplots suffer from scalability issues, since growing data sizes eventually lead to overplotting and visual clutter on a screen with a fixed resolution, which hinders the data analysis process. We propose an algorithm that compensates for irregular sample distributions by a smooth transformation of the scatterplot's visual domain. 
Our algorithm evaluates the scatterplot's density distribution to compute a regularization mapping based on integral images of the rasterized density function. The mapping preserves the samples' neighborhood relations. A few regularization iterations suffice to achieve a nearly uniform sample distribution that efficiently uses the available screen space. We further propose approaches to visually convey the transformation that was applied to the scatterplot and compare them in a user study. We present a novel parallel algorithm for fast GPU-based integral-image computation, which allows for integrating our de-cluttering approach into interactive visual data analysis systems.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Hennes Rave"],"doi":"10.1109/TVCG.2024.3381453","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243381453","time_end":"","time_stamp":"","time_start":"","title":"De-cluttering Scatterplots with Integral Images","uid":"v-tvcg-20243381453","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Advanced manufacturing creates increasingly complex objects with material compositions that are often difficult to characterize by a single modality. Our domain scientists are going beyond traditional methods by employing both X-ray and neutron computed tomography to obtain complementary representations expected to better resolve material boundaries. However, the use of two modalities creates its own challenges for visualization, requiring either complex adjustments of multimodal transfer functions or the need for multiple views. Together with experts in nondestructive evaluation, we designed a novel interactive multimodal visualization approach to create a combined view of the co-registered X-ray and neutron acquisitions of industrial objects. Using an automatic topological segmentation of the bivariate histogram of X-ray and neutron values as a starting point, the system provides a simple yet effective interface to easily create, explore, and adjust a multimodal visualization. We propose a widget with simple brushing interactions that enables the user to quickly correct the segmented histogram results. Our semiautomated system enables domain experts to intuitively explore large multimodal datasets without the need for either advanced segmentation algorithms or knowledge of visualization techniques. 
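The integral images behind the de-cluttering method above are summed-area tables: one cumulative sum per axis, after which the density mass in any axis-aligned box costs four lookups. A minimal numpy sketch (the density raster is invented; the paper computes this step on the GPU):

    import numpy as np

    density = np.random.default_rng(2).random((512, 512))  # rasterized density, illustrative
    I = np.zeros((513, 513))                               # zero border: no edge cases
    I[1:, 1:] = density.cumsum(axis=0).cumsum(axis=1)

    def box_sum(r0, c0, r1, c1):
        # Sum of density[r0:r1, c0:c1] in O(1) via four table lookups.
        return I[r1, c1] - I[r0, c1] - I[r1, c0] + I[r0, c0]

    assert np.isclose(box_sum(10, 20, 200, 300), density[10:200, 20:300].sum())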
We demonstrate our approach using synthetic examples, industrial phantom objects created to stress multimodal scanning techniques, and real-world objects, and we discuss expert feedback.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Xuan Huang"],"doi":"10.1109/TVCG.2024.3382607","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243382607","time_end":"","time_stamp":"","time_start":"","title":"Bimodal Visualization of Industrial X-ray and Neutron Computed Tomography Data","uid":"v-tvcg-20243382607","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Visualization Recommendation Systems (VRSs) are a novel and challenging field of study aiming to help generate insightful visualizations from data and support non-expert users in information discovery. Among the many contributions proposed in this area, some systems embrace the ambitious objective of imitating human analysts to identify relevant relationships in data and make appropriate design choices to represent these relationships with insightful charts. We denote these systems as \"agnostic\" VRSs since they do not rely on human-provided constraints and rules but try to learn the task autonomously. Despite the high application potential of agnostic VRSs, their progress is hindered by several obstacles, including the absence of standardized datasets to train recommendation algorithms, the difficulty of learning design rules, and defining quantitative criteria for evaluating the perceptual effectiveness of generated plots. This paper summarizes the literature on agnostic VRSs and outlines promising future research directions.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Luca Podo"],"doi":"10.1109/TVCG.2024.3374571","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243374571","time_end":"","time_stamp":"","time_start":"","title":"Agnostic Visual Recommendation Systems: Open Challenges and Future Directions","uid":"v-tvcg-20243374571","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Time-stamped event sequences (TSEQs) are time-oriented data without value information, shifting the focus of users to the exploration of temporal event occurrences. TSEQs exist in application domains, such as sleeping behavior, earthquake aftershocks, and stock market crashes. Domain experts face four challenges, for which they could use interactive and visual data analysis methods. First, TSEQs can be large with respect to both the number of sequences and events, often leading to millions of events. Second, domain experts need validated metrics and features to identify interesting patterns. Third, after identifying interesting patterns, domain experts contextualize the patterns to foster sensemaking. Finally, domain experts seek to reduce data complexity by data simplification and machine learning support. We present IVESA, a visual analytics approach for TSEQs. 
It supports the analysis of TSEQs at the granularities of sequences and events, supported with metrics and feature analysis tools. IVESA has multiple linked views that support overview, sort+filter, comparison, details-on-demand, and metadata relation-seeking tasks, as well as data simplification through feature analysis, interactive clustering, filtering, and motif detection and simplification. We evaluated IVESA with three case studies and a user study with six domain experts working with six different datasets and applications. Results demonstrate the usability and generalizability of IVESA across applications and cases that had up to 1,000,000 events.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["J\u00fcrgen Bernard"],"doi":"10.1109/TVCG.2024.3382760","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Time-Stamped Event Sequences, Time-Oriented Data, Visual Analytics, Data-First Design Study, Iterative Design, Visual Interfaces, User Evaluation"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243382760","time_end":"","time_stamp":"","time_start":"","title":"Visual Analysis of Time-Stamped Event Sequences","uid":"v-tvcg-20243382760","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Genomics is at the core of precision medicine, and there are high expectations on genomics-enabled improvement of patient outcomes in the years to come. Around the world, initiatives to increase the use of DNA sequencing in clinical routine are being deployed, such as the use of broad panels in the standard care for oncology patients. Such a development comes at the cost of increased demands on throughput in genomic data analysis. In this paper, we use the task of copy number variant (CNV) analysis as a context for exploring visualization concepts for clinical genomics. CNV calls are generated algorithmically, but time-consuming manual intervention is needed to separate relevant findings from irrelevant ones in the resulting large call candidate lists. We present a visualization environment, named Copycat, to support this review task in a clinical scenario. Key components are a scatter-glyph plot replacing the traditional list visualization, and a glyph representation designed for at-a-glance relevance assessments. 
Moreover, we present results from a formative evaluation of the prototype by domain specialists, from which we elicit insights to guide both prototype improvements and visualization for clinical genomics in general.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Emilia St\u00e5hlbom"],"doi":"10.1109/TVCG.2024.3385118","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Visualization, genomics, copy number variants, clinical decision support, evaluation"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243385118","time_end":"","time_stamp":"","time_start":"","title":"Visualization for diagnostic review of copy number variants in complex DNA sequencing data","uid":"v-tvcg-20243385118","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"This system paper documents the technical foundations for the extension of the Topology ToolKit (TTK) to distributed-memory parallelism with the Message Passing Interface (MPI). While several recent papers introduced topology-based approaches for distributed-memory environments, these reported experiments obtained with tailored, mono-algorithm implementations. In contrast, we describe in this paper a versatile approach (supporting both triangulated domains and regular grids) for the support of topological analysis pipelines, i.e., a sequence of topological algorithms interacting together, possibly on distinct numbers of processes. While developing this extension, we faced several algorithmic and software engineering challenges, which we document in this paper. We describe an MPI extension of TTK\u2019s data structure for triangulation representation and traversal, a central component to the global performance and generality of TTK\u2019s topological implementations. We also introduce an intermediate interface between TTK and MPI, both at the global pipeline level, and at the fine-grain algorithmic level. We provide a taxonomy for the distributed-memory topological algorithms supported by TTK, depending on their communication needs, and provide examples of hybrid MPI+thread parallelizations. Detailed performance analyses show that parallel efficiencies range from 20% to 80% (depending on the algorithms), and that the MPI-specific preconditioning introduced by our framework induces a negligible computation time overhead. We illustrate the new distributed-memory capabilities of TTK with an example of an advanced analysis pipeline, combining multiple algorithms, run on the largest publicly available dataset we have found (120 billion vertices) on a standard cluster with 64 nodes (for a total of 1536 cores). 
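For readers unfamiliar with the programming model TTK is being extended to, a distributed-memory analysis step in MPI style looks like the following mpi4py sketch (not TTK's actual API): each rank owns one block of the domain, and global quantities are assembled with collective reductions.

    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    # Each rank holds one block of the scalar field (invented data).
    local_block = np.random.default_rng(rank).random(1_000_000)

    # Collective reductions assemble global statistics across all ranks.
    global_min = comm.allreduce(local_block.min(), op=MPI.MIN)
    global_max = comm.allreduce(local_block.max(), op=MPI.MAX)
    if rank == 0:
        print(f"global range over {size} ranks: [{global_min}, {global_max}]")

Launched as, e.g., mpirun -n 64 python range.py; the pipelines described in the abstract chain such algorithms together, possibly on distinct numbers of processes.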
Finally, we provide a roadmap for the completion of TTK\u2019s MPI extension, along with generic recommendations for each algorithm communication category.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Julien Tierny"],"doi":"10.1109/TVCG.2024.3390219","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Topological data analysis, high-performance computing, distributed-memory algorithms."],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243390219","time_end":"","time_stamp":"","time_start":"","title":"TTK is Getting MPI-Ready","uid":"v-tvcg-20243390219","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"The use of natural language interfaces (NLIs) to create charts is becoming increasingly popular due to the intuitiveness of natural language interactions. One key challenge in this approach is to accurately capture user intents and transform them to proper chart specifications. This obstructs the wide use of NLI in chart generation, as users' natural language inputs are generally abstract (i.e., ambiguous or under-specified), without a clear specification of visual encodings. Recently, pre-trained large language models (LLMs) have exhibited superior performance in understanding and generating natural language, demonstrating great potential for downstream tasks. Inspired by this major trend, we propose ChartGPT, generating charts from abstract natural language inputs. However, LLMs are struggling to address complex logic problems. To enable the model to accurately specify the complex parameters and perform operations in chart generation, we decompose the generation process into a step-by-step reasoning pipeline, so that the model only needs to reason a single and specific sub-task during each run. Moreover, LLMs are pre-trained on general datasets, which might be biased for the task of chart generation. To provide adequate visualization knowledge, we create a dataset consisting of abstract utterances and charts and improve model performance through fine-tuning. We further design an interactive interface for ChartGPT that allows users to check and modify the intermediate outputs of each step. The effectiveness of the proposed system is evaluated through quantitative evaluations and a user study.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yuan Tian"],"doi":"10.1109/TVCG.2024.3368621","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Natural language interfaces, large language models, data visualization"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243368621","time_end":"","time_stamp":"","time_start":"","title":"ChartGPT: Leveraging LLMs to Generate Charts from Abstract Natural Language","uid":"v-tvcg-20243368621","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"The advances in AI-enabled techniques have accelerated the creation and automation of visualizations in the past decade. However, presenting visualizations in a descriptive and generative format remains a challenge. 
Moreover, current visualization embedding methods focus on standalone visualizations, neglecting the importance of contextual information for multi-view visualizations. To address this issue, we propose a new representation model, Chart2Vec, to learn a universal embedding of visualizations with context-aware information. Chart2Vec aims to support a wide range of downstream visualization tasks such as recommendation and storytelling. Our model considers both structural and semantic information of visualizations in declarative specifications. To enhance the context-aware capability, Chart2Vec employs multi-task learning on both supervised and unsupervised tasks concerning the cooccurrence of visualizations. We evaluate our method through an ablation study, a user study, and a quantitative comparison. The results verified the consistency of our embedding method with human cognition and showed its advantages over existing methods.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Qing Chen"],"doi":"10.1109/TVCG.2024.3383089","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Representation Learning, Multi-view Visualization, Visual Storytelling, Visualization Embedding"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243383089","time_end":"","time_stamp":"","time_start":"","title":"Chart2Vec: A Universal Embedding of Context-Aware Visualizations","uid":"v-tvcg-20243383089","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"The issue of traffic congestion poses a significant obstacle to the development of global cities. One promising solution to tackle this problem is intelligent traffic signal control (TSC). Recently, TSC strategies leveraging reinforcement learning (RL) have garnered attention among researchers. However, the evaluation of these models has primarily relied on fixed metrics like reward and queue length. This limited evaluation approach provides only a narrow view of the model\u2019s decision-making process, impeding its practical implementation. Moreover, effective TSC necessitates coordinated actions across multiple intersections. Existing visual analysis solutions fall short when applied in multi-agent settings. In this study, we delve into the challenge of interpretability in multi-agent reinforcement learning (MARL), particularly within the context of TSC. We propose MARLens, a visual analytics system tailored to understand MARL-based TSC. Our system serves as a versatile platform for both RL and TSC researchers. It empowers them to explore the model\u2019s features from various perspectives, revealing its decision-making processes and shedding light on interactions among different agents. To facilitate quick identification of critical states, we have devised multiple visualization views, complemented by a traffic simulation module that allows users to replay specific training scenarios. To validate the utility of our proposed system, we present three comprehensive case studies, incorporate insights from domain experts through interviews, and conduct a user study. 
These collective efforts underscore the feasibility and effectiveness of MARLens in enhancing our understanding of MARL-based TSC systems and pave the way for more informed and efficient traffic management strategies.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Haipeng Zeng"],"doi":"10.1109/TVCG.2024.3392587","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Traffic signal control, multi-agent, reinforcement learning, visual analytics"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243392587","time_end":"","time_stamp":"","time_start":"","title":"MARLens: Understanding Multi-agent Reinforcement Learning for Traffic Signal Control via Visual Analytics","uid":"v-tvcg-20243392587","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Areas of interest (AOIs) are well-established means of providing semantic information for visualizing, analyzing, and classifying gaze data. However, the usual manual annotation of AOIs is time-consuming and further impaired by ambiguities in label assignments. To address these issues, we present an interactive labeling approach that combines visualization, machine learning, and user-centered explainable annotation. Our system provides uncertainty-aware visualization to build trust in classification with an increasing number of annotated examples. It combines specifically designed EyeFlower glyphs, dimensionality reduction, and selection and exploration techniques in an integrated workflow. The approach is versatile and hardware-agnostic, supporting video stimuli from stationary and unconstrained mobile eye tracking alike. We conducted an expert review to assess labeling strategies and trust building.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Maurice Koch"],"doi":"10.1109/TVCG.2024.3392476","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Visual analytics, eye tracking, uncertainty, active learning, trust building"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243392476","time_end":"","time_stamp":"","time_start":"","title":"Active Gaze Labeling: Visualization for Trust Building","uid":"v-tvcg-20243392476","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Dimensionality reduction (DR) algorithms are diverse and widely used for analyzing high-dimensional data. Various metrics and tools have been proposed to evaluate and interpret the DR results. However, most metrics and methods either do not generalize well to measuring arbitrary DR results in terms of fidelity to the original distribution or lack interactive exploration of DR results. There is still a need for more intuitive and quantitative analysis to interactively explore high-dimensional data and improve interpretability. We propose a metric and a generalized algorithm-agnostic approach based on the concept of capacity to evaluate and analyze the DR results. Based on our approach, we develop a visual analytics system, HiLow, for exploring high-dimensional data and projections. 
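The capacity metric itself is not defined in the abstract above, so as a generic stand-in for checking distribution fidelity between data and projection, the following sketch measures how many of each point's k nearest neighbors survive the projection (plainly a different, simpler metric):

    import numpy as np

    def knn_preservation(X_high, X_low, k=10):
        # Fraction of each point's k nearest neighbors shared between
        # the high-dimensional data and its low-dimensional projection.
        def knn(X):
            d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
            np.fill_diagonal(d, np.inf)
            return np.argsort(d, axis=1)[:, :k]
        hi, lo = knn(X_high), knn(X_low)
        return np.mean([len(set(a) & set(b)) / k for a, b in zip(hi, lo)])

    rng = np.random.default_rng(3)
    X = rng.normal(size=(200, 20))
    print(knn_preservation(X, X[:, :2]))   # naive 2D projection, illustrative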
We also propose a mixed-initiative recommendation algorithm that assists users in interactively manipulating DR results. Users can compare the differences in data distribution after the interaction through HiLow. Furthermore, we propose a novel visualization design focusing on quantitative analysis of differences between high- and low-dimensional data distributions. Finally, through a user study and case studies, we validate the effectiveness of our approach and system in enhancing the interpretability of projections and analyzing the distribution of high- and low-dimensional data.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Siming Chen"],"doi":"10.1109/TVCG.2023.3324851","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233324851","time_end":"","time_stamp":"","time_start":"","title":"Interpreting High-Dimensional Projections With Capacity","uid":"v-tvcg-20233324851","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"The fund investment industry heavily relies on the expertise of fund managers, who bear the responsibility of managing portfolios on behalf of clients. With their investment knowledge and professional skills, fund managers gain a competitive advantage over the average investor in the market. Consequently, investors prefer entrusting their investments to fund managers rather than directly investing in funds. For these investors, the primary concern is selecting a suitable fund manager. While previous studies have employed quantitative or qualitative methods to analyze various aspects of fund managers, such as performance metrics, personal characteristics, and performance persistence, they often face challenges when dealing with a large candidate space. Moreover, distinguishing whether a fund manager's performance stems from skill or luck poses a challenge, making it difficult to align with investors' preferences in the selection process. To address these challenges, this study characterizes the requirements of investors in selecting suitable fund managers and proposes an interactive visual analytics system called FMLens. This system streamlines the fund manager selection process, allowing investors to efficiently assess and deconstruct fund managers' investment styles and abilities across multiple dimensions. Additionally, the system empowers investors to scrutinize and compare fund managers' performances. The effectiveness of the approach is demonstrated through two case studies and a qualitative user study. 
Feedback from domain experts indicates that the system excels in analyzing fund managers from diverse perspectives, enhancing the efficiency of fund manager evaluation and selection.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Longfei Chen"],"doi":"10.1109/TVCG.2024.3394745","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Financial Data, Fund Manager Selection, Visual Analytics"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243394745","time_end":"","time_stamp":"","time_start":"","title":"FMLens: Towards Better Scaffolding the Process of Fund Manager Selection in Fund Investments","uid":"v-tvcg-20243394745","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"This article explores how the ability to recall information in data visualizations depends on the presentation technology. Participants viewed 10 Isotype visualizations on a 2D screen, in 3D, in Virtual Reality (VR) and in Mixed Reality (MR). To provide a fair comparison between the three 3D conditions, we used LIDAR to capture the details of the physical rooms, and used this information to create our textured 3D models. For all environments, we measured the number of visualizations recalled and their order (2D) or spatial location (3D, VR, MR). We also measured the number of syntactic and semantic features recalled. Results of our study show increased recall and greater richness of data understanding in the MR condition. Not only did participants recall more visualizations and ordinal/spatial positions in MR, but they also remembered more details about graph axes and data mappings, and more information about the shape of the data. We discuss how differences in the spatial and kinesthetic cues provided in these different environments could contribute to these results, and reasons why we did not observe comparable performance in the 3D and VR conditions.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Christophe Hurter"],"doi":"10.1109/TVCG.2023.3336588","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Data visualization, Three-dimensional displays, Virtual reality, Mixed reality, Electronic mail, Syntactics, Semantics"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233336588","time_end":"","time_stamp":"","time_start":"","title":"Memory Recall for Data Visualizations in Mixed Reality, Virtual Reality, 3D, and 2D","uid":"v-tvcg-20233336588","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"With the rise of short-form video platforms and the increasing availability of data, we see the potential for people to share short-form videos embedded with data in situ (e.g., daily steps when running) to increase the credibility and expressiveness of their stories. However, creating and sharing such videos in situ is challenging since it involves multiple steps and skills (e.g., data visualization creation and video editing), especially for amateurs. 
By conducting a formative study (N=10) using three design probes, we collected the motivations and design requirements. We then built VisTellAR, a mobile AR authoring tool, to help amateur video creators embed data visualizations in short-form videos in situ. A two-day user study shows that participants (N=12) successfully created various videos with data visualizations in situ and confirmed that the tool was easy to use and learn. AR pre-stage authoring helped people set up data visualizations in the real scene and enabled richer designs in camera movement and in interaction with gestures and physical objects for storytelling.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Wai Tong"],"doi":"10.1109/TVCG.2024.3372104","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Personal data, augmented reality, data visualization, storytelling, short-form video"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243372104","time_end":"","time_stamp":"","time_start":"","title":"VisTellAR: Embedding Data Visualization to Short-form Videos Using Mobile Augmented Reality","uid":"v-tvcg-20243372104","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"The process of labeling medical text plays a crucial role in medical research. Nonetheless, creating accurately labeled medical texts of high quality is often a time-consuming task that requires specialized domain knowledge. Traditional methods for generating labeled data typically rely on rigid rule-based approaches, which may not adapt well to new tasks. While recent machine learning (ML) methodologies have mitigated the manual labeling efforts, configuring models to align with specific research requirements can be challenging for labelers without technical expertise. Moreover, automated labeling techniques, such as transfer learning, face difficulties in directly incorporating expert input, whereas semi-automated methods, like data programming, allow knowledge integration through rules or knowledge bases but may lack continuous result refinement throughout the entire labeling process. In this study, we present a collaborative human-ML teaming workflow that seamlessly integrates visual cluster analysis and active learning to assist domain experts in labeling medical text with high efficiency. Additionally, we introduce an innovative neural network model called the embedding network, which incorporates expert insights to generate task-specific embeddings for medical texts. We integrate the workflow and embedding network into a visual analytics tool named KMTLabeler, equipped with coordinated multi-level views and interactions. 
Two illustrative case studies, along with a controlled user study, provide substantial evidence of the effectiveness of KMTLabeler in creating an efficient labeling environment for medical text classification.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["He Wang"],"doi":"10.1109/TVCG.2024.3406387","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Medical Text Labeling, Expert Knowledge, Embedding Network, Visual Cluster Analysis, Active Learning"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243406387","time_end":"","time_stamp":"","time_start":"","title":"KMTLabeler: An Interactive Knowledge-Assisted Labeling Tool for Medical Text Classification","uid":"v-tvcg-20243406387","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"We present a novel method for the interactive construction and rendering of extremely large molecular scenes, capable of representing multiple biological cells in atomistic detail. Our method is tailored to scenes that are procedurally constructed based on a given set of building rules. Rendering of large scenes normally requires the entire scene available in-core, or alternatively, it requires out-of-core management to load data into the memory hierarchy as a part of the rendering loop. Instead of out-of-core memory management, we propose to procedurally generate the scene on-demand on the fly. The key idea is a positional- and view-dependent procedural scene-construction strategy, where only a fraction of the atomistic scene around the camera is available in the GPU memory at any given time. The atomistic detail is populated into a uniform-space partitioning using a grid that covers the entire scene. Most of the grid cells are not filled with geometry; only those potentially seen by the camera are populated. The atomistic detail is populated in a compute shader and its representation is connected with acceleration data structures for hardware ray-tracing of modern GPUs. Objects that are far away, where atomistic detail is not perceivable from a given viewpoint, are represented by a triangle mesh mapped with a seamless texture, generated from the rendering of geometry from atomistic detail. The algorithm consists of two pipelines, the construction-compute pipeline, and the rendering pipeline, which work together to render molecular scenes at an atomistic resolution far beyond the limit of the GPU memory, containing trillions of atoms. 
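The view-guided construction described above hinges on a visibility predicate over grid cells: only cells in front of the camera and close enough for atomistic detail to be perceivable get populated. A toy CPU-side version (grid, camera, and distance budget are invented; the paper performs this in a compute shader):

    import numpy as np

    def cells_to_populate(cell_centers, cam_pos, cam_dir, max_dist=50.0):
        # Keep cells with a positive projection onto the view direction
        # (in front of the camera) and within the distance budget.
        v = cell_centers - cam_pos
        in_front = (v @ cam_dir) > 0.0
        close = np.linalg.norm(v, axis=1) < max_dist
        return np.where(in_front & close)[0]

    g = np.arange(0.0, 100.0, 10.0)
    centers = np.stack(np.meshgrid(g, g, g, indexing="ij"), axis=-1).reshape(-1, 3)
    idx = cells_to_populate(centers, np.zeros(3), np.array([1.0, 0.0, 0.0]))

Cells outside this set would fall back to the texture-mapped proxy mesh described in the abstract.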
We demonstrate our technique on multiple models of SARS-CoV-2 and the red blood cell.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Ruwayda Alharbi"],"doi":"10.1109/TVCG.2024.3411786","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Interactive rendering, view-guided scene construction, biological data, hardware ray tracing"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243411786","time_end":"","time_stamp":"","time_start":"","title":"\u201cNanomatrix: Scalable Construction of Crowded Biological Environments\u201d","uid":"v-tvcg-20243411786","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Generative text-to-image models, which allow users to create appealing images through a text prompt, have seen a dramatic increase in popularity in recent years. However, most users have a limited understanding of how such models work and often rely on trial-and-error strategies to achieve satisfactory results. The prompt history contains a wealth of information that could provide users with insights into what has been explored and how the prompt changes impact the output image, yet little research attention has been paid to the visual analysis of such a process to support users. We propose the Image Variant Graph, a novel visual representation designed to support comparing prompt-image pairs and exploring the editing history. The Image Variant Graph models prompt differences as edges between corresponding images and presents the distances between images through projection. Based on the graph, we developed the PrompTHis system through co-design with artists. By reviewing and analyzing the prompting history, users can better understand the impact of prompt changes and gain more effective control over image generation. A quantitative user study and qualitative interviews demonstrate that PrompTHis can help users review the prompt history, make sense of the model, and plan their creative process.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yuhan Guo"],"doi":"10.1109/TVCG.2024.3408255","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Text visualization, image visualization, text-to-image generation, editing history, provenance, generative art"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243408255","time_end":"","time_stamp":"","time_start":"","title":"PrompTHis: Visualizing the Process and Influence of Prompt Editing during Text-to-Image Creation","uid":"v-tvcg-20243408255","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Information visualization uses various types of representations to encode data into graphical formats. Prior work on visualization techniques has evaluated the accuracy of perceived numerical data values from visual data encodings such as graphical position, length, orientation, size, and color. Our work aims to extend the research of graphical perception to the use of motion as a data encoding for quantitative values. 
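One concrete reading of motion as a quantitative encoding, as studied above: map the data value to an oscillation frequency so that larger values animate faster. A toy mapping (value range and frequency bounds invented):

    import numpy as np

    def motion_offset(value, t, v_min=0.0, v_max=100.0, f_min=0.2, f_max=2.0):
        # Linearly map the value to a frequency in Hz and return the
        # oscillation offset at time t; larger values oscillate faster.
        f = f_min + (value - v_min) / (v_max - v_min) * (f_max - f_min)
        return np.sin(2.0 * np.pi * f * t)

    print(motion_offset(75.0, t=0.5))   # per-frame glyph displacement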
We present two experiments implementing multiple fundamental aspects of motion such as type, speed, and synchronicity that can be used for numerical value encoding as well as comparing motion to static visual encodings in terms of user perception and accuracy. We studied how well users can assess the differences between several types of motion and static visual encodings and present an updated ranking of accuracy for quantitative judgments. Our results indicate that non-synchronized motion can be interpreted more quickly and more accurately than synchronized motion. Moreover, our ranking of static and motion visual representations shows that motion, especially expansion and translational types, has great potential as a data encoding technique for quantitative values. Finally, we discuss the implications for the use of animation and motion for numerical representations in data visualization.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Shaghayegh Esmaeili"],"doi":"10.1109/TVCG.2022.3193756","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Information visualization, animation and motion-related techniques, empirical study, graphical perception, evaluation."],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20223193756","time_end":"","time_stamp":"","time_start":"","title":"Evaluating Graphical Perception of Visual Motion for Quantitative Data Encoding","uid":"v-tvcg-20223193756","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Point clouds are widely used as a versatile representation of 3D entities and scenes for all scale domains and in a variety of application areas, serving as a fundamental data category to directly convey spatial features. However, due to point sparsity, lack of structure, irregular distribution, and acquisition-related inaccuracies, results of point cloud visualization are often subject to visual complexity and ambiguity. In this regard, non-photorealistic rendering can improve visual communication by reducing the cognitive effort required to understand an image or scene and by directing attention to important features. In the last 20 years, this has been demonstrated by various non-photorealistic rendering approaches that were proposed to target point clouds specifically. However, they do not use a common language or structure for assessment, which complicates comparison and selection. Further, recent developments regarding point cloud characteristics and processing, such as massive data size or web-based rendering, are rarely considered. To address these issues, we present a survey on non-photorealistic rendering approaches for point cloud visualization, providing an overview of the current state of research. We derive a structure for the assessment of approaches, proposing seven primary dimensions for the categorization regarding intended goals, data requirements, used techniques, and mode of operation.
We then systematically assess corresponding approaches and utilize this classification to identify trends and research gaps, motivating future research in the development of effective non-photorealistic point cloud rendering methods.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Ole Wegen"],"doi":"10.1109/TVCG.2024.3402610","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Point clouds, survey, non-photorealistic rendering"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243402610","time_end":"","time_stamp":"","time_start":"","title":"A Survey on Non-photorealistic Rendering Approaches for Point Cloud Visualization","uid":"v-tvcg-20243402610","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"With the growing complexity and volume of data, visualizations have become more intricate, often requiring advanced techniques to convey insights. These complex charts are prevalent in everyday life, and individuals who lack knowledge in data visualization may find them challenging to understand. This paper investigates using Large Language Models (LLMs) to help users with low data literacy understand complex visualizations. While previous studies focus on text interactions with users, we noticed that visual cues are also critical for interpreting charts. We introduce an LLM application that supports both text and visual interaction for guiding chart interpretation. Our study with 26 participants revealed that the in-situ support effectively assisted users in interpreting charts and enhanced learning by addressing specific chart-related questions and encouraging further exploration. Visual communication allowed participants to convey their interests straightforwardly, eliminating the need for textual descriptions. However, the LLM assistance led users to engage less with the system, resulting in fewer insights from the visualizations. This suggests that users, particularly those with lower data literacy and motivation, may have over-relied on the LLM agent. We discuss opportunities for deploying LLMs to enhance visualization literacy while emphasizing the need for a balanced approach.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Kiroong Choe"],"doi":"10.1109/TVCG.2024.3413195","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Visualization literacy, Large language model, Visual communication"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243413195","time_end":"","time_stamp":"","time_start":"","title":"Enhancing Data Literacy On-demand: LLMs as Guides for Novices in Chart Interpretation","uid":"v-tvcg-20243413195","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Data charts are prevalent across various fields due to their efficacy in conveying complex data relationships. However, static charts may sometimes struggle to engage readers and efficiently present intricate information, potentially resulting in limited understanding. 
We introduce \u201cLive Charts,\u201d a new format of presentation that decomposes complex information within a chart and explains the information pieces sequentially through rich animations and accompanying audio narration. We propose an automated approach to revive static charts into Live Charts. Our method integrates GNN-based techniques to analyze the chart components and extract data from charts. Then we adopt large natural language models to generate appropriate animated visuals along with a voice-over to produce Live Charts from static ones. We conducted a thorough evaluation of our approach, which involved model performance, use cases, a crowd-sourced user study, and expert interviews. The results demonstrate that Live Charts offer a multi-sensory experience where readers can follow the information and understand the data insights better. We analyze the benefits and drawbacks of Live Charts over static charts as a new information consumption experience.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Lu Ying"],"doi":"10.1109/TVCG.2024.3397004","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Charts, storytelling, machine learning, automatic visualization"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243397004","time_end":"","time_stamp":"","time_start":"","title":"Reviving Static Charts into Live Charts","uid":"v-tvcg-20243397004","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Visualizing event timelines for collaborative text writing is an important application for navigating and understanding such data, as time passes and the size and complexity of both text and timeline increase. They are often employed by applications such as code repositories and collaborative text editors. In this paper, we present a visualization tool to explore historical records of writing of legislative texts, which were discussed and voted on by an assembly of representatives. Our visualization focuses on event timelines from text documents that involve multiple people and different topics, allowing for observation of different proposed versions of said text or tracking data provenance of given text sections, while highlighting the connections between all elements involved.
We also describe the process of designing such a tool alongside domain experts, with three evaluation steps conducted to verify the effectiveness of our design.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Alfie Abdul-Rahman"],"doi":"10.1109/TVCG.2024.3376406","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Data visualization, Collaboration, History, Humanities, Writing, Navigation, Metadata"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243376406","time_end":"","time_stamp":"","time_start":"","title":"Interactive Hierarchical Timeline for Collaborative Text Negotiation in Historical Records","uid":"v-tvcg-20243376406","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Creating an animated data video with audio narration is a time-consuming and complex task that requires expertise. It involves designing complex animations, turning written scripts into audio narrations, and synchronizing visual changes with the narrations. This paper presents WonderFlow, an interactive authoring tool that facilitates narration-centric design of animated data videos. WonderFlow allows authors to easily specify semantic links between text and the corresponding chart elements. Then it automatically generates audio narration by leveraging text-to-speech techniques and aligns the narration with an animation. WonderFlow provides a structure-aware animation library designed to ease chart animation creation, enabling authors to apply pre-designed animation effects to common visualization components. Additionally, authors can preview and refine their data videos within the same system, without having to switch between different creation tools. A series of evaluation results confirmed that WonderFlow is easy to use and simplifies the creation of data videos with narration-animation interplay.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Leixian Shen"],"doi":"10.1109/TVCG.2024.3411575","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Data video, Data visualization, Narration-animation interplay, Storytelling, Authoring tool"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243411575","time_end":"","time_stamp":"","time_start":"","title":"WonderFlow: Narration-Centric Design of Animated Data Videos","uid":"v-tvcg-20243411575","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"As urban populations grow, effectively assessing urban performance measures such as livability and comfort becomes increasingly important due to their significant socioeconomic impacts. While Point of Interest (POI) data has been utilized for various applications in location-based services, its potential for urban performance analytics remains unexplored. In this paper, we present SenseMap, a novel approach for analyzing urban performance by leveraging POI data as a semantic representation of urban functions.
We quantify the contribution of POIs to different urban performance measures by calculating semantic textual similarities on our constructed corpus. We propose Semantic-adaptive Kernel Density Estimation, which takes into account POIs\u2019 influential areas across different Traffic Analysis Zones and semantic contributions to generate semantic density maps for measures. We design and implement a feature-rich, real-time visual analytics system for users to explore the urban performance of their surroundings. Evaluations with human judgment and reference data demonstrate the feasibility and validity of our method. Usage scenarios and user studies demonstrate the capability, usability, and explainability of our system.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Juntong Chen"],"doi":"10.1109/TVCG.2023.3333356","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Urban data, semantic textual similarity, point of interest, density map, visual analytics, visualization design"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233333356","time_end":"","time_stamp":"","time_start":"","title":"SenseMap: Urban Performance Visualization and Analytics via Semantic Textual Similarity","uid":"v-tvcg-20233333356","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Impact dynamics are crucial for estimating the growth patterns of NFT projects by tracking the diffusion and decay of their relative appeal among stakeholders. Machine learning methods for impact dynamics analysis are incomprehensible and rigid in terms of their interpretability and transparency, whilst stakeholders require interactive tools for informed decision-making. Nevertheless, developing such a tool is challenging due to the substantial, heterogeneous NFT transaction data and the requirements for flexible, customized interactions. To this end, we integrate intuitive visualizations to unveil the impact dynamics of NFT projects. We first conduct a formative study and summarize analysis criteria, including substitution mechanisms, impact attributes, and design requirements from stakeholders. Next, we propose the Minimal Substitution Model to simulate substitutive systems of NFT projects that can be feasibly represented as node-link graphs. Particularly, we utilize attribute-aware techniques to embed the project status and stakeholder behaviors in the layout design. Accordingly, we develop a multi-view visual analytics system, namely NFTracer, allowing interactive analysis of impact dynamics in NFT transactions. We demonstrate the informativeness, effectiveness, and usability of NFTracer by performing two case studies with domain experts and one user study with stakeholders. The studies suggest that NFT projects featuring a higher degree of similarity are more likely to substitute each other.
The impact of NFT projects within substitutive systems is contingent upon the degree of stakeholders\u2019 influx and projects\u2019 freshness.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yifan Cao"],"doi":"10.1109/TVCG.2024.3402834","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Stakeholders, Nonfungible Tokens, Social Networking Online, Visual Analytics, Network Analyzers, Measurement, Layout, Impact Dynamics Analysis, Non Fungible Tokens (NFTs), NFT Transaction Data, Substitutive Systems, Visual Analytics"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243402834","time_end":"","time_stamp":"","time_start":"","title":"Tracing NFT Impact Dynamics in Transaction-flow Substitutive Systems with Visual Analytics","uid":"v-tvcg-20243402834","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Visual analytics supports data analysis tasks within complex domain problems. However, due to the richness of data types, visual designs, and interaction designs, users need to recall and process a significant amount of information when they visually analyze data. These challenges emphasize the need for more intelligent visual analytics methods. Large language models have demonstrated the ability to interpret various forms of textual data, offering the potential to facilitate intelligent support for visual analytics. We propose LEVA, a framework that uses large language models to enhance users' visual analytics workflows at multiple stages: onboarding, exploration, and summarization. To support onboarding, we use large language models to interpret visualization designs and view relationships based on system specifications. For exploration, we use large language models to recommend insights based on the analysis of system status and data to facilitate mixed-initiative exploration. For summarization, we present a selective reporting strategy to retrace analysis history through a stream visualization and generate insight reports with the help of large language models. We demonstrate how LEVA can be integrated into existing visual analytics systems. Two usage scenarios and a user study suggest that LEVA effectively aids users in conducting visual analytics.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yuheng Zhao"],"doi":"10.1109/TVCG.2024.3368060","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Insight recommendation, mixed-initiative, interface agent, large language models, visual analytics"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243368060","time_end":"","time_stamp":"","time_start":"","title":"LEVA: Using Large Language Models to Enhance Visual Analytics","uid":"v-tvcg-20243368060","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"We present V-Mail, a framework of cross-platform applications, interactive techniques, and communication protocols for improved multi-person correspondence about spatial 3D datasets.
Inspired by the daily use of e-mail, V-Mail seeks to enable a similar style of rapid, multi-person communication accessible on any device; however, it aims to do this in the new context of spatial 3D communication, where limited access to 3D graphics hardware typically prevents such communication. The approach integrates visual data storytelling with data exploration, spatial annotations, and animated transitions. V-Mail \u201cdata stories\u201d are exported in a standard video file format to establish a common baseline level of access on (almost) any device. The V-Mail framework also includes a series of complementary client applications and plugins that enable different degrees of story co-authoring and data exploration, adjusted automatically to match the capabilities of various devices. A lightweight, phone-based V-Mail app makes it possible to annotate data by adding captions to the video. These spatial annotations are then immediately accessible to team members running high-end 3D graphics visualization systems that also include a V-Mail client, implemented as a plugin. Results and evaluation from applying V-Mail to assist communication within an interdisciplinary science team studying Antarctic ice sheets confirm the utility of the asynchronous, cross-platform collaborative framework while also highlighting some current limitations and opportunities for future work.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Jung Who Nam"],"doi":"10.1109/TVCG.2022.3229017","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Human-computer interaction, visualization of scientific 3D data, communication, storytelling, immersive analytics"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20223229017","time_end":"","time_stamp":"","time_start":"","title":"V-Mail: 3D-Enabled Correspondence about Spatial Data on (Almost) All Your Devices","uid":"v-tvcg-20223229017","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"In recent years, narrative visualization has gained much attention. Researchers have proposed different design spaces for various narrative visualization genres and scenarios to facilitate the creation process. As users' needs grow and automation technologies advance, increasingly more tools have been designed and developed. In this study, we summarized six genres of narrative visualization (annotated charts, infographics, timelines & storylines, data comics, scrollytelling & slideshow, and data videos) based on previous research and four types of tools (design spaces, authoring tools, ML/AI-supported tools, and ML/AI-generator tools) based on the intelligence and automation level of the tools. We surveyed 105 papers and tools to study how automation can progressively engage in visualization design and narrative processes to help users easily create narrative visualizations. This research aims to provide an overview of current research and development in the automation involvement of narrative visualization tools.
We discuss key research problems in each category and suggest new opportunities to encourage further research in the related domain.","accessible_pdf":false,"authors":[],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Qing Chen"],"doi":"10.1109/TVCG.2023.3261320","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Data Visualization, Automatic Visualization, Narrative Visualization, Design Space, Authoring Tools, Survey"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233261320","time_end":"","time_stamp":"","time_start":"","title":"How Does Automation Shape the Process of Narrative Visualization: A Survey of Tools","uid":"v-tvcg-20233261320","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""}],"time_start":"","title":"TVCG","track":"","zoom_broadcast_link":"","zoom_private_link":"","zoom_private_meeting":"","zoom_private_password":""}]},"v-vr":{"event":"VR Invited Partnership Presentations","event_description":"","event_prefix":"v-vr","event_type":"invited","event_url":"","long_name":"VR Invited Partnership Presentations","organizers":[],"sessions":[]},"w-accessible":{"event":"1st Workshop on Accessible Data Visualization","event_description":"","event_prefix":"w-accessible","event_type":"workshop","event_url":"","long_name":"1st Workshop on Accessible Data Visualization","organizers":[],"sessions":[]},"w-beliv":{"event":"BELIV: evaluation and BEyond - methodoLogIcal approaches for Visualization","event_description":"","event_prefix":"w-beliv","event_type":"workshop","event_url":"","long_name":"BELIV: evaluation and BEyond - methodoLogIcal approaches for Visualization","organizers":[],"sessions":[]},"w-biomedvis":{"event":"Bio+Med+Vis Workshop","event_description":"","event_prefix":"w-biomedvis","event_type":"workshop","event_url":"","long_name":"Bio+Med+Vis Workshop","organizers":[],"sessions":[]},"w-eduvis":{"event":"EduVis: Workshop on Visualization Education, Literacy, and Activities","event_description":"","event_prefix":"w-eduvis","event_type":"workshop","event_url":"","long_name":"EduVis: Workshop on Visualization Education, Literacy, and Activities","organizers":[],"sessions":[]},"w-energyvis":{"event":"EnergyVis 2024: 4th Workshop on Energy Data Visualization","event_description":"","event_prefix":"w-energyvis","event_type":"workshop","event_url":"","long_name":"EnergyVis 2024: 4th Workshop on Energy Data Visualization","organizers":[],"sessions":[]},"w-firstperson":{"event":"First-Person Visualizations for Outdoor Physical Activities: Challenges and Opportunities","event_description":"","event_prefix":"w-firstperson","event_type":"workshop","event_url":"","long_name":"First-Person Visualizations for Outdoor Physical Activities: Challenges and Opportunities","organizers":[],"sessions":[]},"w-future":{"event":"VISions of the Future: Workshop on Sustainable Practices within Visualization and Physicalisation","event_description":"","event_prefix":"w-future","event_type":"workshop","event_url":"","long_name":"VISions of the Future: Workshop on Sustainable Practices within Visualization and Physicalisation","organizers":[],"sessions":[]},"w-nlviz":{"event":"NLVIZ Workshop: Exploring Research Opportunities for Natural Language, Text, and Data 
Visualization","event_description":"","event_prefix":"w-nlviz","event_type":"workshop","event_url":"","long_name":"NLVIZ Workshop: Exploring Research Opportunities for Natural Language, Text, and Data Visualization","organizers":[],"sessions":[]},"w-pdav":{"event":"Progressive Data Analysis and Visualization (PDAV) Workshop.","event_description":"","event_prefix":"w-pdav","event_type":"workshop","event_url":"","long_name":"Progressive Data Analysis and Visualization (PDAV) Workshop.","organizers":[],"sessions":[]},"w-storygenai":{"event":"Workshop on Data Storytelling in an Era of Generative AI","event_description":"","event_prefix":"w-storygenai","event_type":"workshop","event_url":"","long_name":"Workshop on Data Storytelling in an Era of Generative AI","organizers":[],"sessions":[]},"w-topoinvis":{"event":"TopoInVis: Workshop on Topological Data Analysis and Visualization","event_description":"","event_prefix":"w-topoinvis","event_type":"workshop","event_url":"","long_name":"TopoInVis: Workshop on Topological Data Analysis and Visualization","organizers":[],"sessions":[]},"w-uncertainty":{"event":"Uncertainty Visualization: Applications, Techniques, Software, and Decision Frameworks","event_description":"","event_prefix":"w-uncertainty","event_type":"workshop","event_url":"","long_name":"Uncertainty Visualization: Applications, Techniques, Software, and Decision Frameworks","organizers":[],"sessions":[]},"w-vis4climate":{"event":"Visualization for Climate Action and Sustainability","event_description":"","event_prefix":"w-vis4climate","event_type":"workshop","event_url":"","long_name":"Visualization for Climate Action and Sustainability","organizers":[],"sessions":[]},"w-visxai":{"event":"VISxAI: 7th Workshop on Visualization for AI Explainability","event_description":"","event_prefix":"w-visxai","event_type":"workshop","event_url":"","long_name":"VISxAI: 7th Workshop on Visualization for AI Explainability","organizers":[],"sessions":[]}} +{"a-biomedchallenge":{"event":"Bio+MedVis Challenges","event_description":"","event_prefix":"a-biomedchallenge","event_type":"associated","event_url":"","long_name":"Bio+MedVis Challenges","organizers":[],"sessions":[]},"a-ldav":{"event":"LDAV: 13th IEEE Symposium on Large Data Analysis and Visualization","event_description":"","event_prefix":"a-ldav","event_type":"associated","event_url":"","long_name":"LDAV: 13th IEEE Symposium on Large Data Analysis and Visualization","organizers":[],"sessions":[{"chair":[],"discord_category":"","discord_channel":"","discord_channel_id":"","discord_link":"","event_prefix":"a-ldav","ff_link":"","session_id":"a-ldav0","session_image":"a-ldav0.png","time_end":"","time_slots":[{"abstract":"Cuneiform is the earliest known system of writing, first developed for the Sumerian language of southern Mesopotamia in the second half of the 4th millennium BC. Cuneiform signs are obtained by impressing a stylus on fresh clay tablets. For certain purposes, e.g. authentication by seal imprint, some cuneiform tablets were enclosed in clay envelopes, which cannot be opened without destroying them. The aim of our interdisciplinary project is the non-invasive study of clay tablets. A portable X-ray micro-CT scanner is developed to acquire density data of such artifacts on a high-resolution, regular 3D grid at collection sites. 
The resulting volume data is processed through feature-preserving denoising, extraction of high-accuracy surfaces using a manifold dual marching cubes algorithm and extraction of local features by enhanced curvature rendering and ambient occlusion. For the non-invasive study of cuneiform inscriptions, the tablet is virtually separated from its envelope by curvature-based segmentation. The computational- and data-intensive algorithms are optimized for near-real-time offline usage with limited resources at collection sites. To visualize the complexity-reduced and octree-based compressed representation of surfaces, we develop and implement an interactive application. To facilitate the analysis of such clay tablets, we implement shape-based feature extraction algorithms to enhance cuneiform recognition. Our workflow supports innovative 3D display and interaction techniques such as autostereoscopic displays and gesture control.","accessible_pdf":false,"authors":[{"affiliations":["Universit\u00e4t Hamburg, Hamburg, Germany"],"email":"stephan.olbrich@uni-hamburg.de","is_corresponding":true,"name":"Stephan Olbrich"},{"affiliations":["Universit\u00e4t Hamburg, Hamburg, Germany"],"email":"andreas.beckert@uni-hamburg.de","is_corresponding":false,"name":"Andreas Beckert"},{"affiliations":["Centre National de la Recherche Scientifique (CNRS), Nanterre, France"],"email":"cecile.michel@cnrs.fr","is_corresponding":false,"name":"C\u00e9cile Michel"},{"affiliations":["Deutsches Elektronen-Synchrotron (DESY), Hamburg, Germany","Universit\u00e4t Hamburg, Hamburg, Germany"],"email":"christian.schroer@desy.de","is_corresponding":false,"name":"Christian Schroer"},{"affiliations":["Deutsches Elektronen-Synchrotron (DESY), Hamburg, Germany","Universit\u00e4t Hamburg, Hamburg, Germany"],"email":"samaneh.ehteram@desy.de","is_corresponding":false,"name":"Samaneh Ehteram"},{"affiliations":["Deutsches Elektronen-Synchrotron (DESY), Hamburg, Germany"],"email":"andreas.schropp@desy.de","is_corresponding":false,"name":"Andreas Schropp"},{"affiliations":["Deutsches Elektronen-Synchrotron (DESY), Hamburg, Germany"],"email":"philipp.paetzold@desy.de","is_corresponding":false,"name":"Philipp Paetzold"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Stephan Olbrich"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"associated","presentation_mode":"","session_id":"a-ldav0","slot_id":"a-ldav-1002","time_end":"","time_stamp":"","time_start":"","title":"Efficient Analysis and Visualization of High-Resolution Computed Tomography Data for the Exploration of Enclosed Cuneiform Tablets","uid":"a-ldav-1002","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Dimensionality reduction (DR) is a well-established approach for the visualization of high-dimensional data sets. While DR methods are often applied to typical DR benchmark data sets in the literature, they might suffer from high runtime complexity and memory requirements, making them unsuitable for large data visualization especially in environments outside of high-performance computing. To perform DR on large data sets, we propose the use of out-of-sample extensions. 
Such extensions allow inserting new data into existing projections, which we leverage to iteratively project data into a reference projection that consists only of a small manageable subset. This process makes it possible to perform DR out-of-core on large data, which would otherwise not be possible due to memory and runtime limitations. For metric multidimensional scaling (MDS), we contribute an implementation with out-of-sample projection capability since typical software libraries do not support it. We provide an evaluation of the projection quality of five common DR algorithms (MDS, PCA, t-SNE, UMAP, and autoencoders) using quality metrics from the literature and analyze the trade-off between the size of the reference set and projection quality. The runtime behavior of the algorithms is also quantified with respect to reference set size, out-of-sample batch size, and dimensionality of the data sets. Furthermore, we compare the out-of-sample approach to other recently introduced DR methods, such as PaCMAP and TriMAP, which claim to handle larger data sets than traditional approaches. To showcase the usefulness of DR on this large scale, we contribute a use case where we analyze ensembles of streamlines amounting to one billion projected instances.","accessible_pdf":false,"authors":[{"affiliations":["Universit\u00e4t Stuttgart, Stuttgart, Germany"],"email":"lucareichmann01@gmail.com","is_corresponding":false,"name":"Luca Marcel Reichmann"},{"affiliations":["University of Stuttgart, Stuttgart, Germany"],"email":"david.haegele@visus.uni-stuttgart.de","is_corresponding":true,"name":"David H\u00e4gele"},{"affiliations":["University of Stuttgart, Stuttgart, Germany"],"email":"weiskopf@visus.uni-stuttgart.de","is_corresponding":false,"name":"Daniel Weiskopf"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["David H\u00e4gele"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"associated","presentation_mode":"","session_id":"a-ldav0","slot_id":"a-ldav-1003","time_end":"","time_stamp":"","time_start":"","title":"Out-of-Core Dimensionality Reduction for Large Data via Out-of-Sample Extensions","uid":"a-ldav-1003","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Scientists generate petabytes of data daily to help uncover environmental trends or behaviors that are hard to predict. For example, understanding climate simulations based on the long-term average of temperature, precipitation, and other environmental variables is essential to predicting and establishing root causes of future undesirable scenarios and assessing possible mitigation strategies. Unfortunately, bottlenecks in petascale workflows restrict scientists' ability to analyze and visualize the necessary information due to requirements for extensive computational resources, obstacles in data accessibility, and inefficient analysis algorithms. This paper presents an approach to managing, visualizing, and analyzing petabytes of data within a browser on equipment ranging from the top NASA supercomputer to commodity hardware like a laptop. Our approach is based on a novel data fabric abstraction layer that allows querying scientific information in a form that is user-friendly while hiding the complexities of dealing with file systems or cloud services. 
We also optimize network utilization while streaming from petascale repositories through state-of-the-art progressive compression algorithms. Based on this abstraction, we provide customizable dashboards that can be accessed from any device with an internet connection, offering straightforward access to vast amounts of data typically not available to those without access to uniquely expensive hardware resources. Our dashboards provide and improve the ability to access and, more importantly, use massive data for a wide range of users, from top scientists with access to leadership-class computing environments to undergraduate students from disadvantaged backgrounds at minority-serving institutions. We focus on NASA's use of petascale climate datasets as an example of particular societal impact and, therefore, a case where achieving equity in science participation is critical. In particular, we validate our approach by improving the ability of climate scientists to explore their data even on the top NASA supercomputer, introducing the ability to study their data in a fully interactive environment instead of being limited to using pre-choreographed videos that can each take days to generate. We also successfully introduced the same dashboards and simplified training material in an undergraduate class on Geospatial Analysis at a minority-serving campus (Utah State Blanding), where 69% of the students are Native American and 86% are low-income. The same dashboards are also released in simplified form to the general public, providing an unparalleled democratization of the access and use of climate data that can be extended to most scientific domains.","accessible_pdf":false,"authors":[{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"aashishpanta0@gmail.com","is_corresponding":true,"name":"Aashish Panta"},{"affiliations":["Scientific Computing and Imaging Institute, Salt Lake City, United States"],"email":"xuanhuang@sci.utah.edu","is_corresponding":false,"name":"Xuan Huang"},{"affiliations":["NASA Ames Research Center, Mountain View, United States"],"email":"nina.mccurdy@gmail.com","is_corresponding":false,"name":"Nina McCurdy"},{"affiliations":["NASA, mountain View, United States"],"email":"david.ellsworth@nasa.gov","is_corresponding":false,"name":"David Ellsworth"},{"affiliations":["university of Utah, Salt lake city, United States"],"email":"amy.a.gooch@gmail.com","is_corresponding":false,"name":"Amy Gooch"},{"affiliations":["university of Utah, Salt lake city, United States"],"email":"scrgiorgio@gmail.com","is_corresponding":false,"name":"Giorgio Scorzelli"},{"affiliations":["NASA, Pasadena, United States"],"email":"hector.torres.gutierrez@jpl.nasa.gov","is_corresponding":false,"name":"Hector Torres"},{"affiliations":["caltech, Pasadena, United States"],"email":"pklein@caltech.edu","is_corresponding":false,"name":"Patrice Klein"},{"affiliations":["Utah State University Blanding, Blanding, United States"],"email":"gustavo.ovando@usu.edu","is_corresponding":false,"name":"Gustavo Ovando-Montejo"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"pascucci.valerio@gmail.com","is_corresponding":false,"name":"Valerio Pascucci"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Aashish
Panta"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"associated","presentation_mode":"","session_id":"a-ldav0","slot_id":"a-ldav-1006","time_end":"","time_stamp":"","time_start":"","title":"Web-based Visualization and Analytics of Petascale data: Equity as a Tide that Lifts All Boats","uid":"a-ldav-1006","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"This paper describes the adaptation of a well-scaling parallel algorithm for computing Morse-Smale segmentations based on path compression to a distributed computational setting. Additionally, we extend the algorithm to efficiently compute connected components in distributed structured and unstructured grids, based either on the connectivity of the underlying mesh or a feature mask. Our implementation is seamlessly integrated with the distributed extension of the Topology ToolKit (TTK), ensuring robust performance and scalability. To demonstrate the practicality and efficiency of our algorithms, we conducted a series of scaling experiments on large-scale datasets, with sizes of up to 4096^3 vertices on up to 64 nodes and 768 cores.","accessible_pdf":false,"authors":[{"affiliations":["RPTU Kaiserslautern-Landau, Kaiserslautern, Germany"],"email":"mswill@rhrk.uni-kl.de","is_corresponding":true,"name":"Michael Will"},{"affiliations":["RPTU Kaiserslautern-Landau, Kaiserslautern, Germany"],"email":"jl@jluk.de","is_corresponding":false,"name":"Jonas Lukasczyk"},{"affiliations":["CNRS, Paris, France","Sorbonne Universit\u00e9, Paris, France"],"email":"julien.tierny@sorbonne-universite.fr","is_corresponding":false,"name":"Julien Tierny"},{"affiliations":["RPTU Kaiserslautern-Landau, Kaiserslautern, Germany"],"email":"garth@rptu.de","is_corresponding":false,"name":"Christoph Garth"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Michael Will"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"associated","presentation_mode":"","session_id":"a-ldav0","slot_id":"a-ldav-1011","time_end":"","time_stamp":"","time_start":"","title":"Distributed Path Compression for Piecewise Linear Morse-Smale Segmentations and Connected Components","uid":"a-ldav-1011","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"We propose and discuss a paradigm that allows for expressing data- parallel rendering with the classically non-parallel ANARI API. 
We propose this as a new standard for data-parallel rendering, describe two different implementations of this paradigm, and use multiple sample integrations into existing applications to show how easy it is to adopt and what can be gained from doing so.","accessible_pdf":false,"authors":[{"affiliations":["NVIDIA, Salt Lake City, United States"],"email":"ingowald@gmail.com","is_corresponding":false,"name":"Ingo Wald"},{"affiliations":["University of Cologne, Cologne, Germany"],"email":"zellmann@uni-koeln.de","is_corresponding":true,"name":"Stefan Zellmann"},{"affiliations":["NVIDIA, Austin, United States"],"email":"jeffamstutz@gmail.com","is_corresponding":false,"name":"Jefferson Amstutz"},{"affiliations":["University of California, Davis, Davis, United States"],"email":"qadwu@ucdavis.edu","is_corresponding":false,"name":"Qi Wu"},{"affiliations":["NVIDIA, Santa Clara, United States"],"email":"kgriffin@nvidia.com","is_corresponding":false,"name":"Kevin Shawn Griffin"},{"affiliations":["VSB - Technical University of Ostrava, Ostrava, Czech Republic"],"email":"milan.jaros@vsb.cz","is_corresponding":false,"name":"Milan Jaro\u0161"},{"affiliations":["University of Cologne, Cologne, Germany"],"email":"wesner@uni-koeln.de","is_corresponding":false,"name":"Stefan Wesner"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Stefan Zellmann"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"associated","presentation_mode":"","session_id":"a-ldav0","slot_id":"a-ldav-1016","time_end":"","time_stamp":"","time_start":"","title":"Standardized Data-Parallel Rendering Using ANARI","uid":"a-ldav-1016","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Functional approximation as a high-order continuous representation provides a more accurate value and gradient query compared to the traditional discrete volume representation. Volume visualization directly rendered from functional approximation generates high-quality rendering results without high-order artifacts caused by trilinear interpolations. However, querying an encoded functional approximation is computationally expensive, especially when the input dataset is large, making functional approximation impractical for interactive visualization. In this paper, we propose a novel functional approximation multi-resolution representation, Adaptive-FAM, which is lightweight and fast to query. We also design a GPU-accelerated out-of-core multi-resolution volume visualization framework that directly utilizes the Adaptive-FAM representation to generate high-quality rendering with interactive responsiveness. Our method can not only dramatically decrease the caching time, one of the main contributors to input latency, but also effectively improve the cache hit rate through prefetching.
Our approach significantly outperforms the traditional function approximation method in terms of input latency while maintaining comparable rendering quality.","accessible_pdf":false,"authors":[{"affiliations":["University of Nebraska-Lincoln, Lincoln, United States"],"email":"jianxin.sun@huskers.unl.edu","is_corresponding":true,"name":"Jianxin Sun"},{"affiliations":["Argonne National Laboratory, Lemont, United States"],"email":"dlenz@anl.gov","is_corresponding":false,"name":"David Lenz"},{"affiliations":["University of Nebraska-Lincoln, Lincoln, United States"],"email":"yu@cse.unl.edu","is_corresponding":false,"name":"Hongfeng Yu"},{"affiliations":["Argonne National Laboratory, Lemont, United States"],"email":"tpeterka@mcs.anl.gov","is_corresponding":false,"name":"Tom Peterka"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Jianxin Sun"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"associated","presentation_mode":"","session_id":"a-ldav0","slot_id":"a-ldav-1018","time_end":"","time_stamp":"","time_start":"","title":"Adaptive Multi-Resolution Encoding for Interactive Large-Scale Volume Visualization through Functional Approximation","uid":"a-ldav-1018","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""}],"time_start":"","title":"LDAV","track":"","zoom_broadcast_link":"","zoom_private_link":"","zoom_private_meeting":"","zoom_private_password":""}]},"a-scivis-contest":{"event":"SciVis Contest","event_description":"","event_prefix":"a-scivis-contest","event_type":"associated","event_url":"","long_name":"SciVis Contest","organizers":[],"sessions":[]},"a-visap":{"event":"VIS Arts Program","event_description":"","event_prefix":"a-visap","event_type":"visap","event_url":"","long_name":"VIS Arts Program","organizers":[],"sessions":[]},"a-visinpractice":{"event":"VisInPractice","event_description":"","event_prefix":"a-visinpractice","event_type":"associated","event_url":"","long_name":"VisInPractice","organizers":[],"sessions":[]},"a-vizsec":{"event":"VizSec","event_description":"","event_prefix":"a-vizsec","event_type":"associated","event_url":"","long_name":"VizSec","organizers":[],"sessions":[]},"conf":{"event":"Conference Events","event_description":"","event_prefix":"conf","event_type":"vis","event_url":"","long_name":"Conference Events","organizers":[],"sessions":[]},"s-vds":{"event":"VDS: Visualization in Data Science Symposium","event_description":"","event_prefix":"s-vds","event_type":"associated","event_url":"","long_name":"VDS: Visualization in Data Science Symposium","organizers":[],"sessions":[{"chair":[],"discord_category":"","discord_channel":"","discord_channel_id":"","discord_link":"","event_prefix":"s-vds","ff_link":"","session_id":"s-vds0","session_image":"s-vds0.png","time_end":"","time_slots":[{"abstract":"Efficient public transport systems are crucial for sustainable urban development as cities face increasing mobility demands. Yet, many public transport networks struggle to meet diverse user needs due to historical development, urban constraints, and financial limitations. Traditionally, planning of transport network structure is often based on limited surveys, expert opinions, or partial usage statistics. This provides an incomplete basis for decision-making. 
We introduce a data-driven approach to public transport planning and optimization, calculating detailed accessibility measures at the individual housing level. Our visual analytics workflow combines population-group-based simulations with dynamic infrastructure analysis, utilizing a scenario-based model to simulate daily travel patterns of varied demographic groups, including schoolchildren, students, workers, and pensioners. These population groups, each with unique mobility requirements and routines, interact with the transport system under different scenarios, traveling to and from Points of Interest (POI), assessed through travel time calculations. Results are visualized through heatmaps, density maps, and network overlays, as well as detailed statistics. Our system allows us to analyze both the underlying data and simulation results on multiple levels of granularity, delivering both broad insights and granular details. Case studies with the city of Konstanz, Germany, reveal key areas where public transport does not meet specific needs, confirmed through a formative user study. Due to the high cost of changing legacy networks, our analysis facilitates the identification of strategic enhancements, such as optimized schedules, rerouting, and a few targeted stop relocations, highlighting consequential variations in accessibility and pinpointing critical service gaps. Our research advances urban transport analytics by providing policymakers and citizens with a system that delivers both broad insights and granular detail into public transport services for a data-driven quality assessment at housing-level detail.","accessible_pdf":false,"authors":[{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"yannick.metz@uni-konstanz.de","is_corresponding":false,"name":"Yannick Metz"},{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"dennis-fabian.ackermann@uni-konstanz.de","is_corresponding":false,"name":"Dennis Ackermann"},{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"keim@uni-konstanz.de","is_corresponding":false,"name":"Daniel Keim"},{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"max.fischer@uni-konstanz.de","is_corresponding":true,"name":"Maximilian T. Fischer"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Maximilian T. Fischer"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"associated","presentation_mode":"","session_id":"s-vds0","slot_id":"s-vds-1000","time_end":"","time_stamp":"","time_start":"","title":"Interactive Public Transport Infrastructure Analysis through Mobility Profiles: Making the Mobility Transition Transparent","uid":"s-vds-1000","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"This position paper explores the interplay between automation and human involvement in data science. It synthesizes perspectives from Automated Data Science (AutoDS) and Interactive Data Visualization (VIS), which traditionally represent opposing ends of the human-machine spectrum. While AutoDS aims to enhance efficiency by reducing human tasks, VIS emphasizes the importance of nuanced understanding, innovation, and context provided by human involvement.
This paper examines these dichotomies through an online survey and advocates for a balanced approach that harmonizes the efficiency of automation with the irreplaceable insights of human expertise. Ultimately, we address the essential question of not just what we can automate, but what we should automate, seeking strategies that prioritize technological advancement alongside the fundamental need for human oversight.","accessible_pdf":false,"authors":[{"affiliations":["Tufts University, Boston, United States"],"email":"jen@cs.tufts.edu","is_corresponding":true,"name":"Jen Rogers"},{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, INRIA, Orsay, France"],"email":"mehdi.chakhchoukh@universite-paris-saclay.fr","is_corresponding":false,"name":"Mehdi Chakhchoukh"},{"affiliations":["Leiden Universiteit, Leiden, Netherlands"],"email":"anastacio@aim.rwth-aachen.de","is_corresponding":false,"name":"Marie Anastacio"},{"affiliations":["Tulane University, New Orleans, United States"],"email":"rfaust1@tulane.edu","is_corresponding":false,"name":"Rebecca Faust"},{"affiliations":["University of Warwick, Coventry, United Kingdom"],"email":"cagatay.turkay@warwick.ac.uk","is_corresponding":false,"name":"Cagatay Turkay"},{"affiliations":["University of Wyoming, Laramie, United States"],"email":"larsko@uwyo.edu","is_corresponding":false,"name":"Lars Kotthoff"},{"affiliations":["University of Stuttgart, Stuttgart, Germany"],"email":"steffen.koch@vis.uni-stuttgart.de","is_corresponding":false,"name":"Steffen Koch"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"andreas.kerren@liu.se","is_corresponding":false,"name":"Andreas Kerren"},{"affiliations":["University of Zurich, Zurich, Switzerland"],"email":"bernard@ifi.uzh.ch","is_corresponding":false,"name":"J\u00fcrgen Bernard"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Jen Rogers"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"associated","presentation_mode":"","session_id":"s-vds0","slot_id":"s-vds-1002","time_end":"","time_stamp":"","time_start":"","title":"Visualization and Automation in Data Science: Exploring the Paradox of Humans-in-the-Loop","uid":"s-vds-1002","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Categorical data does not have an intrinsic definition of distance or order, and therefore, established visualization techniques for categorical data only allow for a set-based or frequency-based analysis, e.g., through Euler diagrams or Parallel Sets, and do not support a similarity-based analysis. We present a novel dimensionality reduction-based visualization for categorical data, which is based on defining the distance of two data items as the number of varying attributes. Our technique enables users to pre-attentively detect groups of similar data items and observe the properties of the projection, such as attributes strongly influencing the embedding. Our prototype visually encodes data properties in an enhanced scatterplot-like visualization, visualizing attributes in the background to show the distribution of categories. In addition, we propose two graph-based measures to quantify the plot's visual quality, which rank attributes according to their contribution to cluster cohesion. 
To demonstrate the capabilities of our similarity-based projection method, we compare it to Euler diagrams and Parallel Sets regarding visual scalability and evaluate it quantitatively on seven real-world datasets using a range of common quality measures. Further, we validate the benefits of our approach through an expert study with five data scientists analyzing the Titanic and Mushroom datasets with up to 23 attributes and 8124 category combinations. Our results indicate that our Categorical Data Map offers an effective analysis method for large datasets with a high number of category combinations.","accessible_pdf":false,"authors":[{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"frederik.dennig@uni-konstanz.de","is_corresponding":true,"name":"Frederik L. Dennig"},{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"lucas.joos@uni-konstanz.de","is_corresponding":false,"name":"Lucas Joos"},{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"patrick.paetzold@uni-konstanz.de","is_corresponding":false,"name":"Patrick Paetzold"},{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"blumbergdaniela@gmail.com","is_corresponding":false,"name":"Daniela Blumberg"},{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"oliver.deussen@uni-konstanz.de","is_corresponding":false,"name":"Oliver Deussen"},{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"keim@uni-konstanz.de","is_corresponding":false,"name":"Daniel Keim"},{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"max.fischer@uni-konstanz.de","is_corresponding":false,"name":"Maximilian T. Fischer"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Frederik L. Dennig"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"associated","presentation_mode":"","session_id":"s-vds0","slot_id":"s-vds-1007","time_end":"","time_stamp":"","time_start":"","title":"The Categorical Data Map: A Multidimensional Scaling-Based Approach","uid":"s-vds-1007","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Clustering is an essential technique across various domains, such as data science, machine learning, and eXplainable Artificial Intelligence. Information visualization and visual analytics techniques have been proven to effectively support human involvement in the visual exploration of clustered data to enhance the understanding and refinement of cluster assignments. This paper presents an attempt at a deep and exhaustive evaluation of the perceptive aspects of clustering quality metrics, focusing on the Davies-Bouldin Index, Dunn Index, Calinski-Harabasz Index, and Silhouette Score. Our research is centered around two main objectives: a) assessing the human perception of common cluster validity indices (CVIs) in 2D scatterplots and b) exploring the potential of Large Language Models (LLMs), in particular GPT-4o, to emulate the assessed human perception.
By discussing the obtained results and highlighting limitations and areas for further exploration, this paper aims to lay a foundation for future research activities.","accessible_pdf":false,"authors":[{"affiliations":["Sapienza University of Rome, Rome, Italy"],"email":"blasilli@diag.uniroma1.it","is_corresponding":true,"name":"Graziano Blasilli"},{"affiliations":["Northeastern University, Boston, United States"],"email":"kerrigan.d@northeastern.edu","is_corresponding":false,"name":"Daniel Kerrigan"},{"affiliations":["Northeastern University, Boston, United States"],"email":"e.bertini@northeastern.edu","is_corresponding":false,"name":"Enrico Bertini"},{"affiliations":["Sapienza University of Rome, Rome, Italy"],"email":"santucci@diag.uniroma1.it","is_corresponding":false,"name":"Giuseppe Santucci"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Graziano Blasilli"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"associated","presentation_mode":"","session_id":"s-vds0","slot_id":"s-vds-1013","time_end":"","time_stamp":"","time_start":"","title":"Towards a Visual Perception-Based Analysis of Clustering Quality Metrics","uid":"s-vds-1013","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Recommender systems have become integral to digital experiences, shaping user interactions and preferences across various platforms. Despite their widespread use, these systems often suffer from algorithmic biases that can lead to unfair and unsatisfactory user experiences. This study introduces an interactive tool designed to help users comprehend and explore the impacts of algorithmic harms in recommender systems. By leveraging visualizations, counterfactual explanations, and interactive modules, the tool allows users to investigate how biases such as miscalibration, stereotypes, and filter bubbles affect their recommendations. Informed by in-depth user interviews, the tool offers both general users and researchers increased transparency and personalized impact assessments, ultimately fostering a better understanding of algorithmic biases and contributing to more equitable recommendation outcomes. 
This work provides valuable insights for future research and practical applications in mitigating bias and enhancing fairness in machine learning algorithms.","accessible_pdf":false,"authors":[{"affiliations":["University of Pittsburgh, Pittsburgh, United States"],"email":"yongsu.ahn@pitt.edu","is_corresponding":true,"name":"Yongsu Ahn"},{"affiliations":["School of Computing and Information, University of Pittsburgh, Pittsburgh, United States"],"email":"quinnkwolter@gmail.com","is_corresponding":false,"name":"Quinn K Wolter"},{"affiliations":["Quest Diagnostics, Pittsburgh, United States"],"email":"jonilyndick@gmail.com","is_corresponding":false,"name":"Jonilyn Dick"},{"affiliations":["Quest Diagnostics, Pittsburgh, United States"],"email":"janetad99@gmail.com","is_corresponding":false,"name":"Janet Dick"},{"affiliations":["University of Pittsburgh, Pittsburgh, United States"],"email":"yurulin@pitt.edu","is_corresponding":false,"name":"Yu-Ru Lin"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yongsu Ahn"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"associated","presentation_mode":"","session_id":"s-vds0","slot_id":"s-vds-1021","time_end":"","time_stamp":"","time_start":"","title":"Interactive Counterfactual Exploration of Algorithmic Harms in Recommender Systems","uid":"s-vds-1021","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"This position paper discusses the profound impact of Large Language Models (LLMs) on semantic change, emphasizing the need for comprehensive monitoring and visualization techniques. Building on established concepts from linguistics, we examine the interdependency between mental and language models, discussing how LLMs influence and are influenced by human cognition and societal context. We introduce three primary theories to conceptualize such influences: Recontextualization, Standardization, and Semantic Dementia, illustrating how LLMs drive, standardize, and potentially degrade language semantics. Our subsequent review categorizes methods for visualizing semantic change into frequency-based, embedding-based, and context-based techniques, and is the first to assess their effectiveness in capturing linguistic evolution: Embedding-based methods are highlighted as crucial for a detailed semantic analysis, reflecting both broad trends and specific linguistic changes. We underscore the need for novel visual, interactive tools to monitor and explain semantic changes induced by LLMs, ensuring the preservation of linguistic diversity and mitigating linguistic biases. 
This work provides essential insights for future research on semantic change visualization and the dynamic nature of language evolution in the times of LLMs.","accessible_pdf":false,"authors":[{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"raphael.buchmueller@uni-konstanz.de","is_corresponding":true,"name":"Raphael Buchm\u00fcller"},{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"friederike.koerte@uni-konstanz.de","is_corresponding":false,"name":"Friederike K\u00f6rte"},{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"keim@uni-konstanz.de","is_corresponding":false,"name":"Daniel Keim"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Raphael Buchm\u00fcller"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"associated","presentation_mode":"","session_id":"s-vds0","slot_id":"s-vds-1029","time_end":"","time_stamp":"","time_start":"","title":"Seeing the Shift: Keep an Eye on Semantic Changes in Times of LLMs","uid":"s-vds-1029","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""}],"time_start":"","title":"VDS","track":"","zoom_broadcast_link":"","zoom_private_link":"","zoom_private_meeting":"","zoom_private_password":""}]},"t-analysis":{"event":"Visualization Analysis and Design","event_description":"","event_prefix":"t-analysis","event_type":"tutorial","event_url":"","long_name":"Visualization Analysis and Design","organizers":[],"sessions":[]},"t-color":{"event":"Generating Color Schemes for our Data Visualizations","event_description":"","event_prefix":"t-color","event_type":"tutorial","event_url":"","long_name":"Generating Color Schemes for our Data Visualizations","organizers":[],"sessions":[]},"t-immersive":{"event":"Developing Immersive and Collaborative Visualizations with Web Technologies","event_description":"","event_prefix":"t-immersive","event_type":"tutorial","event_url":"","long_name":"Developing Immersive and Collaborative Visualizations with Web Technologies","organizers":[],"sessions":[]},"t-llm4vis":{"event":"LLM4Vis: Large Language Models for Information Visualization","event_description":"","event_prefix":"t-llm4vis","event_type":"tutorial","event_url":"","long_name":"LLM4Vis: Large Language Models for Information Visualization","organizers":[],"sessions":[]},"t-nationalscience":{"event":"Enabling Scientific Discovery: A Tutorial for Harnessing the Power of the National Science Data Fabric for Large-Scale Data Analysis","event_description":"","event_prefix":"t-nationalscience","event_type":"tutorial","event_url":"","long_name":"Enabling Scientific Discovery: A Tutorial for Harnessing the Power of the National Science Data Fabric for Large-Scale Data Analysis","organizers":[],"sessions":[]},"t-participatory":{"event":"Preparing, Conducting, and Analyzing Participatory Design Sessions for Information Visualizations","event_description":"","event_prefix":"t-participatory","event_type":"tutorial","event_url":"","long_name":"Preparing, Conducting, and Analyzing Participatory Design Sessions for Information Visualizations","organizers":[],"sessions":[]},"t-revisit":{"event":"Running Online User Studies with the reVISit Framework","event_description":"","event_prefix":"t-revisit","event_type":"tutorial","event_url":"","long_name":"Running Online User Studies with the reVISit 
Framework","organizers":[],"sessions":[]},"v-cga":{"event":"CG&A Invited Partnership Presentations","event_description":"","event_prefix":"v-cga","event_type":"invited","event_url":"","long_name":"CG&A Invited Partnership Presentations","organizers":[],"sessions":[{"chair":[],"discord_category":"","discord_channel":"","discord_channel_id":"","discord_link":"","event_prefix":"v-cga","ff_link":"","session_id":"cga0","session_image":"cga0.png","time_end":"","time_slots":[{"abstract":"We consider the general problem known as job shop scheduling, in which multiple jobs consist of sequential operations that need to be executed or served by appropriate machines having limited capacities. For example, train journeys (jobs) consist of moves and stops (operations) to be served by rail tracks and stations (machines). A schedule is an assignment of the job operations to machines and times where and when they will be executed. The developers of computational methods for job scheduling need tools enabling them to explore how their methods work. At a high level of generality, we define the system of pertinent exploration tasks and a combination of visualizations capable of supporting the tasks. We provide general descriptions of the purposes, contents, visual encoding, properties, and interactive facilities of the visualizations and illustrate them with images from an example implementation in air traffic management. We justify the design of the visualizations based on the tasks, principles of creating visualizations for pattern discovery, and scalability requirements. The outcomes of our research are sufficiently general to be of use in a variety of applications.","accessible_pdf":false,"authors":[{"affiliations":"","email":"gennady.andrienko@iais.fraunhofer.de","is_corresponding":true,"name":"Gennady Andrienko"},{"affiliations":"","email":"natalia.andrienko@iais.fraunhofer.de","is_corresponding":false,"name":"Natalia Andrienko"},{"affiliations":"","email":"jmcordero@e-crida.enaire.es","is_corresponding":false,"name":"Jose Manuel Cordero Garcia"},{"affiliations":"","email":"dirk.hecker@iais.fraunhofer.de","is_corresponding":false,"name":"Dirk Hecker"},{"affiliations":"","email":"georgev@unipi.gr","is_corresponding":false,"name":"George A. Vouros"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Gennady Andrienko"],"doi":"10.1109/MCG.2022.3163437","external_paper_link":"","fno":"9745375","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Visualization, Schedules, Task Analysis, Optimization, Job Shop Scheduling, Data Analysis, Processor Scheduling, Iterative Methods"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"cga0","slot_id":"v-cga-9745375","time_end":"","time_stamp":"","time_start":"","title":"Supporting Visual Exploration of Iterative Job Scheduling","uid":"v-cga-9745375","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"The number of online news articles available nowadays is rapidly increasing. When exploring articles on online news portals, navigation is mostly limited to the most recent ones. The spatial context and the history of topics are not immediately accessible. To support readers in the exploration or research of articles in large datasets, we developed an interactive 3D globe visualization. We worked with datasets from multiple online news portals containing up to 45,000 articles. 
Using agglomerative hierarchical clustering, we represent the referenced locations of news articles on a globe with different levels of detail. We employ two interaction schemes for navigating the viewpoint on the visualization, including support for hand-held devices and desktop PCs, and provide search functionality and interactive filtering. Based on this framework, we explore additional modules for jointly exploring the spatial and temporal domain of the dataset and incorporating live news into the visualization.","accessible_pdf":false,"authors":[{"affiliations":"","email":"nicholas.ingulfsen@gmail.com","is_corresponding":false,"name":"Nicholas Ingulfsen"},{"affiliations":"","email":"simone.schaub@visinf.tu-darmstadt.de","is_corresponding":false,"name":"Simone Schaub-Meyer"},{"affiliations":"","email":"grossm@inf.ethz.ch","is_corresponding":false,"name":"Markus Gross"},{"affiliations":"","email":"tobias.guenther@fau.de","is_corresponding":true,"name":"Tobias G\u00fcnther"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Tobias G\u00fcnther"],"doi":"10.1109/MCG.2021.3127434","external_paper_link":"","fno":"9612019","has_image":false,"has_pdf":false,"image_caption":"","keywords":["News Articles, Number Of Articles, Headlines, Interactive Visualization, Online News, Agglomerative Clustering, Local News, Interactive Exploration, Desktop PC, Different Levels Of Detail, News Portals, Spatial Information, User Study, 3D Space, Human-computer Interaction, Temporal Information, Third Dimension, Tablet Computer, Pie Chart, News Stories, 3D Visualization, Article Details, Visual Point, Bottom Of The Screen, Geospatial Data, Type Of Visualization, Largest Dataset, Tagging Location, Live Feed"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"cga0","slot_id":"v-cga-9612019","time_end":"","time_stamp":"","time_start":"","title":"News Globe: Visualization of Geolocalized News Articles","uid":"v-cga-9612019","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"In many applications, developed deep-learning models need to be iteratively debugged and refined to improve the model efficiency over time. Debugging some models, such as temporal multilabel classification (TMLC), where each data point can simultaneously belong to multiple classes, can be especially challenging due to the complexity of the analysis and instances that need to be reviewed. In this article, focusing on video activity recognition as an application of TMLC, we propose DETOXER, an interactive visual debugging system to support finding different error types and scopes by providing multiscope explanations.","accessible_pdf":false,"authors":[{"affiliations":"","email":"m.nourani@northeastern.edu","is_corresponding":true,"name":"Mahsan Nourani"},{"affiliations":"","email":"chiradeep.roy@utdallas.edu","is_corresponding":false,"name":"Chiradeep Roy"},{"affiliations":"","email":"dhoneycutt@ufl.edu","is_corresponding":false,"name":"Donald R. Honeycutt"},{"affiliations":"","email":"eragan@ufl.edu","is_corresponding":false,"name":"Eric D. 
Ragan"},{"affiliations":"","email":"vibhav.gogate@utdallas.edu","is_corresponding":false,"name":"Vibhav Gogate"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Mahsan Nourani"],"doi":"10.1109/MCG.2022.3201465","external_paper_link":"","fno":"9866547","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Debugging, Analytical Models, Heating Systems, Data Models, Computational Modeling, Activity Recognition, Deep Learning, Multi Label Classification, Visualization Tool, Temporal Classification, Visual Debugging, False Positive, False Negative, Active Components, Deep Learning Models, Types Of Errors, Video Frames, Error Detection, Detection Of Types, Action Recognition, Interactive Visualization, Sequence Of Points, Design Goals, Positive Errors, Critical Outcomes, Error Patterns, Global Panel, False Negative Rate, False Positive Rate, Heatmap, Visual Approach, Truth Labels, True Positive, Confidence Score, Anomaly Detection, Interface Elements"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"cga0","slot_id":"v-cga-9866547","time_end":"","time_stamp":"","time_start":"","title":"DETOXER: A Visual Debugging Tool With Multiscope Explanations for Temporal Multilabel Classification","uid":"v-cga-9866547","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"The Internet of Food (IoF) is an emerging field in smart foodsheds, involving the creation of a knowledge graph (KG) about the environment, agriculture, food, diet, and health. However, the heterogeneity and size of the KG present challenges for downstream tasks, such as information retrieval and interactive exploration. To address those challenges, we propose an interactive knowledge and learning environment (IKLE) that integrates three programming and modeling languages to support multiple downstream tasks in the analysis pipeline. To make IKLE easier to use, we have developed algorithms to automate the generation of each language. In addition, we collaborated with domain experts to design and develop a dataflow visualization system, which embeds the automatic language generations into components and allows users to build their analysis pipeline by dragging and connecting components of interest. We have demonstrated the effectiveness of IKLE through three real-world case studies in smart foodsheds.","accessible_pdf":false,"authors":[{"affiliations":"","email":"tu.253@osu.edu","is_corresponding":true,"name":"Yamei Tu"},{"affiliations":"","email":"wang.5502@osu.edu","is_corresponding":false,"name":"Xiaoqi Wang"},{"affiliations":"","email":"qiu.580@osu.edu","is_corresponding":false,"name":"Rui Qiu"},{"affiliations":"","email":"hwshen@cse.ohio-state.edu","is_corresponding":false,"name":"Han-Wei Shen"},{"affiliations":"","email":"mmmille6@wisc.edu","is_corresponding":false,"name":"Michelle Miller"},{"affiliations":"","email":"jinmeng.rao@wisc.edu","is_corresponding":false,"name":"Jinmeng Rao"},{"affiliations":"","email":"song.gao@wisc.edu","is_corresponding":false,"name":"Song Gao"},{"affiliations":"","email":"prhuber@ucdavis.edu","is_corresponding":false,"name":"Patrick R. Huber"},{"affiliations":"","email":"adhollander@ucdavis.edu","is_corresponding":false,"name":"Allan D. 
Hollander"},{"affiliations":"","email":"matthew@ic-foods.org","is_corresponding":false,"name":"Matthew Lange"},{"affiliations":"","email":"cgarcia@tacc.utexas.edu","is_corresponding":false,"name":"Christian R. Garcia"},{"affiliations":"","email":"jstubbs@tacc.utexas.edu","is_corresponding":false,"name":"Joe Stubbs"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yamei Tu"],"doi":"10.1109/MCG.2023.3263960","external_paper_link":"","fno":"10091124","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Learning Environment, Interactive Learning Environments, Programming Language, Visual System, Analysis Pipeline, Patterns In Data, Flow Data, Human-computer Interaction, Food Systems, Information Retrieval, Domain Experts, Language Model, Automatic Generation, Interactive Exploration, Cyberinfrastructure, Pre-trained Language Models, Resource Description Framework, SPARQL Query, DBpedia, Entity Types, Data Visualization, Resilience Analysis, Load Data, Query Results, Supply Chain, Network Flow"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"cga0","slot_id":"v-cga-10091124","time_end":"","time_stamp":"","time_start":"","title":"An Interactive Knowledge and Learning Environment in Smart Foodsheds","uid":"v-cga-10091124","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Set visualization facilitates the exploration and analysis of set-type data. However, how sets should be visualized when the data are uncertain is still an open research challenge. To address the problem of depicting uncertainty in set visualization, we ask 1) which aspects of set type data can be affected by uncertainty and 2) which characteristics of uncertainty influence the visualization design. We answer these research questions by first describing a conceptual framework that brings together 1) the information that is primarily relevant in sets (i.e., set membership, set attributes, and element attributes) and 2) different plausible categories of (un)certainty (i.e., certainty, undefined uncertainty as a binary fact, and defined uncertainty as quantifiable measure). Following the structure of our framework, we systematically discuss basic visualization examples of integrating uncertainty in set visualizations. 
We draw on existing knowledge about general uncertainty visualization and previous evidence of its effectiveness.","accessible_pdf":false,"authors":[{"affiliations":"","email":"christian.tominski@uni-rostock.de","is_corresponding":false,"name":"Christian Tominski"},{"affiliations":"","email":"m.behrisch@uu.nl","is_corresponding":true,"name":"Michael Behrisch"},{"affiliations":"","email":"susanne.bleisch@fhnw.ch","is_corresponding":false,"name":"Susanne Bleisch"},{"affiliations":"","email":"sara.fabrikant@geo.uzh.ch","is_corresponding":false,"name":"Sara Irina Fabrikant"},{"affiliations":"","email":"eva.mayr@donau-uni.ac.at","is_corresponding":false,"name":"Eva Mayr"},{"affiliations":"","email":"miksch@ifs.tuwien.ac.at","is_corresponding":false,"name":"Silvia Miksch"},{"affiliations":"","email":"helen.purchase@monash.edu","is_corresponding":false,"name":"Helen Purchase"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Michael Behrisch"],"doi":"10.1109/MCG.2023.3300441","external_paper_link":"","fno":"10198358","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Uncertainty, Data Visualization, Measurement Uncertainty, Visual Analytics, Terminology, Task Analysis, Surveys, Conceptual Framework, Cardinality, Data Visualization, Visual Representation, Measure Of The Amount, Set Membership, Intersection Set, Visual Design, Different Types Of Uncertainty, Missing Values, Visual Methods, Fuzzy Set, Age Of Students, Color Values, Uncertainty Values, Explicit Representation, Aggregate Value, Exact Information, Uncertain Information, Table Cells, Temporal Uncertainty, Uncertain Data, Representation Of Uncertainty, Implicit Representation, Spatial Uncertainty, Point Symbol, Visual Clutter, Color Hue, Graphical Elements, Uncertain Value"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"cga0","slot_id":"v-cga-10198358","time_end":"","time_stamp":"","time_start":"","title":"Visualizing Uncertainty in Sets","uid":"v-cga-10198358","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"We report a study investigating the viability of using interactive visualizations to aid architectural design with building codes. While visualizations have been used to support general architectural design exploration, existing computational solutions treat building codes as separate from, rather than part of, the design process, creating challenges for architects. Through a series of participatory design studies with professional architects, we found that interactive visualizations have promising potential to aid design exploration and sensemaking in early stages of architectural design by providing feedback about potential allowances and consequences of design decisions. However, implementing a visualization system necessitates addressing the complexity and ambiguity inherent in building codes. 
To tackle these challenges, we propose various user-driven knowledge management mechanisms for integrating, negotiating, interpreting, and documenting building code rules.","accessible_pdf":false,"authors":[{"affiliations":"","email":"snowak@sfu.ca","is_corresponding":true,"name":"Stan Nowak"},{"affiliations":"","email":"bon.aseniero@autodesk.com","is_corresponding":false,"name":"Bon Adriel Aseniero"},{"affiliations":"","email":"lyn@sfu.ca","is_corresponding":false,"name":"Lyn Bartram"},{"affiliations":"","email":"tovi@dgp.toronto.edu","is_corresponding":false,"name":"Tovi Grossman"},{"affiliations":"","email":"George.fitzmaurice@autodesk.com","is_corresponding":false,"name":"George Fitzmaurice"},{"affiliations":"","email":"justin.matejka@autodesk.com","is_corresponding":false,"name":"Justin Matejka"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Stan Nowak"],"doi":"10.1109/MCG.2023.3307971","external_paper_link":"","fno":"10227838","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"cga0","slot_id":"v-cga-10227838","time_end":"","time_stamp":"","time_start":"","title":"Identifying Visualization Opportunities to Help Architects Manage the Complexity of Building Codes","uid":"v-cga-10227838","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Existing dynamic weighted graph visualization approaches rely on users\u2019 mental comparison to perceive temporal evolution of dynamic weighted graphs, hindering users from effectively analyzing changes across multiple timeslices. We propose DiffSeer, a novel approach for dynamic weighted graph visualization by explicitly visualizing the differences of graph structures (e.g., edge weight differences) between adjacent timeslices. Specifically, we present a novel nested matrix design that overviews the graph structure differences over a time period as well as shows graph structure details in the timeslices of user interest. By collectively considering the overall temporal evolution and structure details in each timeslice, an optimization-based node reordering strategy is developed to group nodes with similar evolution patterns and highlight interesting graph structure details in each timeslice. We conducted two case studies on real-world graph datasets and in-depth interviews with 12 target users to evaluate DiffSeer. 
The results demonstrate its effectiveness in visualizing dynamic weighted graphs.","accessible_pdf":false,"authors":[{"affiliations":"","email":"wenxiaolin@stu.scu.edu.cn","is_corresponding":false,"name":"Xiaolin Wen"},{"affiliations":"","email":"yongwang@smu.edu.sg","is_corresponding":true,"name":"Yong Wang"},{"affiliations":"","email":"wumeixuan@stu.scu.edu.cn","is_corresponding":false,"name":"Meixuan Wu"},{"affiliations":"","email":"wangfengjie@stu.scu.edu.cn","is_corresponding":false,"name":"Fengjie Wang"},{"affiliations":"","email":"xuanwu.yue@connect.ust.hk","is_corresponding":false,"name":"Xuanwu Yue"},{"affiliations":"","email":"shenqm@sustech.edu.cn","is_corresponding":false,"name":"Qiaomu Shen"},{"affiliations":"","email":"mayx@sustech.edu.cn","is_corresponding":false,"name":"Yuxin Ma"},{"affiliations":"","email":"zhumin@scu.edu.cn","is_corresponding":false,"name":"Min Zhu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yong Wang"],"doi":"10.1109/MCG.2023.3248289","external_paper_link":"","fno":"10078374","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Visibility Graph, Spatial Patterns, Weight Change, In-depth Interviews, Temporal Changes, Temporal Evolution, Negative Changes, Interesting Patterns, Edge Weights, Real-world Datasets, Graph Structure, Visual Approach, Dynamic Visualization, Dynamic Graph, Financial Networks, Graph Datasets, Similar Evolutionary Patterns, User Interviews, Similar Changes, Chinese New Year, Sector Indices, Original Graph, Red Rectangle, Nodes In Order, Stock Market Crash, Stacked Bar Charts, Different Types Of Matrices, Chinese New, Blue Rectangle"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"cga0","slot_id":"v-cga-10078374","time_end":"","time_stamp":"","time_start":"","title":"DiffSeer: Difference-Based Dynamic Weighted Graph Visualization","uid":"v-cga-10078374","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Some 15 years ago, Visualization Viewpoints published an influential article titled Rainbow Color Map (Still) Considered Harmful (Borland and Taylor, 2007). The paper argued that the \u201crainbow colormap\u2019s characteristics of confusing the viewer, obscuring the data and actively misleading interpretation make it a poor choice for visualization.\u201d Subsequent articles often repeat and extend these arguments, so much so that avoiding rainbow colormaps, along with their derivatives, has become dogma in the visualization community. Despite this loud and persistent recommendation, scientists continue to use rainbow colormaps. Have we failed to communicate our message, or do rainbow colormaps offer advantages that have not been fully appreciated? We argue that rainbow colormaps have properties that are underappreciated by existing design conventions. We explore key critiques of the rainbow in the context of recent research to understand where and how rainbows might be misunderstood. 
Choosing a colormap is a complex task, and rainbow colormaps can be useful for selected applications.","accessible_pdf":false,"authors":[{"affiliations":"","email":"cware@ccom.unh.edu","is_corresponding":false,"name":"Colin Ware"},{"affiliations":"","email":"mstone@acm.org","is_corresponding":true,"name":"Maureen Stone"},{"affiliations":"","email":"danielle.szafir@cs.unc.edu","is_corresponding":false,"name":"Danielle Albers Szafir"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Maureen Stone"],"doi":"10.1109/MCG.2023.3246111","external_paper_link":"","fno":"10128890","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Image Color Analysis, Semantics, Data Visualization, Estimation, Reliability Engineering"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"cga0","slot_id":"v-cga-10128890","time_end":"","time_stamp":"","time_start":"","title":"Rainbow Colormaps Are Not All Bad","uid":"v-cga-10128890","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"A membership function categorizes quantities along with a confidence degree. This article investigates a generic user interaction based on this function for categorizing various types of quantities without modification, which empowers users to articulate uncertainty categorization and enhance their visual data analysis significantly. We present the technique design and an online prototype, supplemented with insights from three case studies that highlight the technique\u2019s efficacy among different types of quantities. Furthermore, we conduct a formal user study to scrutinize the process and reasoning users employ while utilizing our technique. The findings indicate that our technique can help users create customized categories. 
Both our code and the interactive prototype are made available as open-source resources, intended for application across varied domains as a generic tool.","accessible_pdf":false,"authors":[{"affiliations":"","email":"liuliqun.cs@gmail.com","is_corresponding":true,"name":"Liqun Liu"},{"affiliations":"","email":"romain.vuillemot@ec-lyon.fr","is_corresponding":false,"name":"Romain Vuillemot"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Liqun Liu"],"doi":"10.1109/MCG.2023.3301449","external_paper_link":"","fno":"10207831","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Data Visualization, Uncertainty, Prototypes, Fuzzy Logic, Image Color Analysis, Fuzzy Sets, Open Source Software, General Function, Membership Function, User Study, Classification Process, Fuzzy Logic, Quantitative Values, Visualization Techniques, Amount Of Type, Fuzzy Theory, General Interaction, Temperature Dataset, Interaction Techniques, Carbon Dioxide, Computation Time, Rule Based, Web Page, Real World Scenarios, Fuzzy Set, Domain Experts, Supercritical CO 2, Parallel Coordinates, Fuzzy System, Fuzzy Clustering, Interactive Visualization, Amount Of Items, Large Scale Problems"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"cga0","slot_id":"v-cga-10207831","time_end":"","time_stamp":"","time_start":"","title":"A Generic Interactive Membership Function for Categorization of Quantities","uid":"v-cga-10207831","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Although visualizations are a useful tool for helping people to understand information, they can also have unintended effects on human cognition. This is especially true for uncertain information, which is difficult for people to understand. Prior work has found that different methods of visualizing uncertain information can produce different patterns of decision making from users. However, uncertainty can also be represented via text or numerical information, and few studies have systematically compared these types of representations to visualizations of uncertainty. We present two experiments that compared visual representations of risk (icon arrays) to numerical representations (natural frequencies) in a wildfire evacuation task. Like prior studies, we found that different types of visual cues led to different patterns of decision making. In addition, our comparison of visual and numerical representations of risk found that people were more likely to evacuate when they saw visualizations than when they saw numerical representations. These experiments reinforce the idea that design choices are not neutral: seemingly minor differences in how information is represented can have important impacts on human risk perception and decision making.","accessible_pdf":false,"authors":[{"affiliations":"","email":"lematze@sandia.gov","is_corresponding":true,"name":"Laura E. Matzen"},{"affiliations":"","email":"bchowel@sandia.gov","is_corresponding":false,"name":"Breannan C. Howell"},{"affiliations":"","email":"mctrumb@sandia.gov","is_corresponding":false,"name":"Michael C. S. Trumbo"},{"affiliations":"","email":"kmdivis@sandia.gov","is_corresponding":false,"name":"Kristin M. Divis"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Laura E. 
Matzen"],"doi":"10.1109/MCG.2023.3299875","external_paper_link":"","fno":"10201383","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Visualization, Uncertainty, Decision Making, Costs, Task Analysis, Laboratories, Information Analysis, Decision Making, Visual Representation, Numerical Representation, Decision Patterns, Deterministic, Risk Perception, Specific Information, Fundamental Frequency, Point Values, Representation Of Information, Risk Information, Visual Conditions, Numerous Conditions, Human Decision, Numerical Information, Impact Of Different Types, Uncertain Information, Type Of Visualization, Differences In Risk Perception, Representation Of Uncertainty, Increase In Participation, Participants In Experiment, Individual Difference Measures, Sandia National Laboratories, Risk Propensity, Bonus Payments, Average Response Time, Difference In Probability, Response Time"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"cga0","slot_id":"v-cga-10201383","time_end":"","time_stamp":"","time_start":"","title":"Numerical and Visual Representations of Uncertainty Lead to Different Patterns of Decision Making","uid":"v-cga-10201383","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Traditional approaches to data visualization have often focused on comparing different subsets of data, and this is reflected in the many techniques developed and evaluated over the years for visual comparison. Similarly, common workflows for exploratory visualization are built upon the idea of users interactively applying various filter and grouping mechanisms in search of new insights. This paradigm has proven effective at helping users identify correlations between variables that can inform thinking and decision-making. However, recent studies show that consumers of visualizations often draw causal conclusions even when not supported by the data. Motivated by these observations, this article highlights recent advances from a growing community of researchers exploring methods that aim to directly support visual causal inference. However, many of these approaches have their own limitations, which limit their use in many real-world scenarios. 
This article, therefore, also outlines a set of key open challenges and corresponding priorities for new research to advance the state of the art in visual causal inference.","accessible_pdf":false,"authors":[{"affiliations":"","email":"borland@renci.org","is_corresponding":false,"name":"David Borland"},{"affiliations":"","email":"zeyuwang@cs.unc.edu","is_corresponding":false,"name":"Arran Zeyu Wang"},{"affiliations":"","email":"gotz@unc.edu","is_corresponding":false,"name":"David Gotz"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Arran Zeyu Wang"],"doi":"10.1109/MCG.2023.3338788","external_paper_link":"","fno":"10414267","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Analytical Models, Correlation, Visual Analytics, Decision Making, Data Visualization, Reliability Theory, Cognition, Inference Algorithms, Causal Inference, Causality, Social Media, Exploratory Analysis, Data Visualization, Visual Representation, Visual Analysis, Visualization Tool, Open Challenges, Interactive Visualization, Assembly Line, Different Subsets Of Data, Visual Analytics Tool, Data Driven Decision Making, Data Quality, Statistical Models, Causal Effect, Visual System, Use Of Social Media, Bar Charts, Causal Model, Causal Graph, Chart Types, Directed Acyclic Graph, Visual Design, Portion Of The Dataset, Causal Structure, Prior Section, Causal Explanations, Line Graph"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"cga0","slot_id":"v-cga-10414267","time_end":"","time_stamp":"","time_start":"","title":"Using Counterfactuals to Improve Causal Inferences From Visualizations","uid":"v-cga-10414267","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Recent developments in artificial intelligence (AI) and machine learning (ML) have led to the creation of powerful generative AI methods and tools capable of producing text, code, images, and other media in response to user prompts. Significant interest in the technology has led to speculation about what fields, including visualization, can be augmented or replaced by such approaches. However, there remains a lack of understanding about which visualization activities may be particularly suitable for the application of generative AI. Drawing on examples from the field, we map current and emerging capabilities of generative AI across the different phases of the visualization lifecycle and describe salient opportunities and challenges.","accessible_pdf":false,"authors":[{"affiliations":"","email":"rahul.basole@accenture.com","is_corresponding":false,"name":"Rahul C. 
Basole"},{"affiliations":"","email":"timothy.major@accenture.com","is_corresponding":true,"name":"Timothy Major"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Timothy Major"],"doi":"10.1109/MCG.2024.3362168","external_paper_link":"","fno":"10478355","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Generative AI, Art, Artificial Intelligence, Machine Learning, Visualization, Media, Augmented Reality, Machine Learning, Visual Representation, Professional Knowledge, Creative Process, Domain Experts, Generalization Capability, Development Of Artificial Intelligence, Artificial Intelligence Capabilities, Iterative Process, Natural Language, Commercial Software, Hallucinations, Team Sports, Design Requirements, Intelligence Agencies, Recommender Systems, User Requirements, Iterative Design, Use Of Artificial Intelligence, Visual Design, Phase Assemblage, Data Literacy"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"cga0","slot_id":"v-cga-10478355","time_end":"","time_stamp":"","time_start":"","title":"Generative AI for Visualization: Opportunities and Challenges","uid":"v-cga-10478355","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""}],"time_start":"","title":"CG&A","track":"","zoom_broadcast_link":"","zoom_private_link":"","zoom_private_meeting":"","zoom_private_password":""}]},"v-full":{"event":"VIS Full Papers","event_description":"","event_prefix":"v-full","event_type":"full","event_url":"","long_name":"VIS Full Papers","organizers":[],"sessions":[{"chair":[],"discord_category":"","discord_channel":"","discord_channel_id":"","discord_link":"","event_prefix":"v-full","ff_link":"","session_id":"full0","session_image":"full0.png","time_end":"","time_slots":[{"abstract":"We present a visual analytics approach for multi-level visual exploration of users\u2019 interaction strategies in an interactive digital environment. The use of interactive touchscreen exhibits in informal learning environments, such as museums and science centers, often incorporate frameworks that classify learning processes, such as Bloom\u2019s taxonomy, to achieve better user engagement and knowledge transfer. To analyze user behavior within these digital environments, interaction logs are recorded to capture diverse exploration strategies. However, analysis of such logs is challenging, especially in terms of coupling interactions and cognitive learning processes, and existing work within learning and educational contexts remains limited. To address these gaps, we develop a visual analytics approach for analyzing interaction logs that supports exploration at the individual user level and multi-user comparison. The approach utilizes algorithmic methods to identify similarities in users' interactions and reveal their exploration strategies. We motivate and illustrate our approach through an application scenario, using event sequences derived from interaction log data in an experimental study conducted with science center visitors from diverse backgrounds and demographics. The study involves 14 users completing tasks of increasing complexity, designed to stimulate different levels of cognitive learning processes. 
We implement our approach in an interactive visual analytics prototype system, named VISID, and together with domain experts, discover a set of task-solving exploration strategies, such as \"cascading\" and \"nested-loop\", which reflect different levels of learning processes from Bloom's taxonomy. Finally, we discuss the generalizability and scalability of the presented system and the need for further research with data acquired in the wild.","accessible_pdf":false,"authors":[{"affiliations":["Media and Information Technology, Norrk\u00f6ping, Sweden"],"email":"peilin.yu@liu.se","is_corresponding":true,"name":"Peilin Yu"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"aida.vitoria@liu.se","is_corresponding":false,"name":"Aida Nordman"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"marta.koc-januchta@liu.se","is_corresponding":false,"name":"Marta M. Koc-Januchta"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"konrad.schonborn@liu.se","is_corresponding":false,"name":"Konrad J Sch\u00f6nborn"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"lonni.besancon@gmail.com","is_corresponding":false,"name":"Lonni Besan\u00e7on"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"katerina.vrotsou@liu.se","is_corresponding":false,"name":"Katerina Vrotsou"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Peilin Yu"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1026","time_end":"","time_stamp":"","time_start":"","title":"Revealing Interaction Dynamics: Multi-Level Visual Exploration of User Strategies with an Interactive Digital Environment","uid":"v-full-1026","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"In soccer, player scouting aims to find players suitable for a team to increase the winning chance in future matches. To scout suitable players, coaches and analysts need to consider various complicated factors, such as the players' performance in the tactics of a new team, which is hard to learn directly from their historical performance. Match simulation methods have been introduced to scout players by estimating their expected contributions to a new team. However, they usually focus on the simulation of match results and hardly support interactive analysis to navigate potential target players and compare them in fine-grained simulated behaviors. In this work, we propose a visual analytics method to assist soccer player scouting based on match simulation. We construct a two-level match simulation framework for estimating both match results and player behaviors when a player comes to a new team. Based on the framework, we develop a visual analytics system, Team-Scouter, to facilitate the simulative-based soccer player scouting process through player navigation, comparison, and explanation. With our system, coaches and analysts can find potential players suitable for the team and compare them on historical and expected performances. To explain the players' expected performances, the system provides a visual comparison between the simulated behaviors of the player and the actual ones. 
The usefulness and effectiveness of the system are demonstrated by two case studies on a real-world dataset and an expert interview.","accessible_pdf":false,"authors":[{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"caoanqi28@163.com","is_corresponding":true,"name":"Anqi Cao"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"xxie@zju.edu.cn","is_corresponding":false,"name":"Xiao Xie"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"2366385033@qq.com","is_corresponding":false,"name":"Runjin Zhang"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"1282533692@qq.com","is_corresponding":false,"name":"Yuxin Tian"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"fanmu_032@zju.edu.cn","is_corresponding":false,"name":"Mu Fan"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"zhang_hui@zju.edu.cn","is_corresponding":false,"name":"Hui Zhang"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"ycwu@zju.edu.cn","is_corresponding":false,"name":"Yingcai Wu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Anqi Cao"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1031","time_end":"","time_stamp":"","time_start":"","title":"Team-Scouter: Simulative Visual Analytics of Soccer Player Scouting","uid":"v-full-1031","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Dynamic topic modeling is useful at discovering the development and change in latent topics over time. However, present methodology relies on algorithms that separate document and word representations. This prevents the creation of a meaningful embedding space where changes in word usage and documents can be directly analyzed in a temporal context. This paper proposes an expansion of the compass-aligned temporal Word2Vec methodology into dynamic topic modeling. Such a method allows for the direct comparison of word and document embeddings across time in dynamic topics. This enables the creation of visualizations that incorporate diachronic word embeddings within the context of documents into topic visualizations. In experiments against the current state-of-the-art, our proposed method demonstrates overall competitive performance in topic relevancy and diversity across temporal datasets of varying size. 
Simultaneously, it provides insightful visualizations focused on temporal word embeddings while maintaining the insights provided by global topic evolution, advancing our understanding of how topics evolve over time.","accessible_pdf":false,"authors":[{"affiliations":["Virginia Tech, Blacksburg, United States"],"email":"d4n1elp@vt.edu","is_corresponding":true,"name":"Daniel Palamarchuk"},{"affiliations":["Virginia Polytechnic Institute of Technology , Blacksburg, United States"],"email":"lemaraw@vt.edu","is_corresponding":false,"name":"Lemara Williams"},{"affiliations":["Virginia Tech, Blacksburg, United States"],"email":"bmayer@cs.vt.edu","is_corresponding":false,"name":"Brian Mayer"},{"affiliations":["Savannah River National Laboratory, Aiken, United States"],"email":"thomas.danielson@srnl.doe.gov","is_corresponding":false,"name":"Thomas Danielson"},{"affiliations":["Tulane University, New Orleans, United States"],"email":"rfaust1@tulane.edu","is_corresponding":false,"name":"Rebecca Faust"},{"affiliations":["Savannah River National Laboratory, Aiken, United States"],"email":"larry.deschaine@srnl.doe.gov","is_corresponding":false,"name":"Larry M Deschaine PhD"},{"affiliations":["Virginia Tech, Blacksburg, United States"],"email":"north@vt.edu","is_corresponding":false,"name":"Chris North"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Daniel Palamarchuk"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1032","time_end":"","time_stamp":"","time_start":"","title":"Visualizing Temporal Topic Embeddings with a Compass","uid":"v-full-1032","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Propagation analysis refers to studying how information spreads on social media, a pivotal endeavor for understanding social sentiment and public opinions. Numerous studies contribute to visualizing information spread, but few have considered the implicit and complex diffusion patterns among multiple platforms. To bridge the gap, we collaborated with professionals to discover crucial factors that dissect the mechanism of cross-platform information spread. Based on that, we propose an information diffusion model that estimates the likelihood of a topic/post spreading among different social media platforms. Moreover, we propose a novel visual metaphor that encapsulates cross-platform patterns in a manner analogous to the spread of seeds across gardens. Specifically, we visualize social platforms, posts, implicit cross-platform routes, and salient instances as elements of a virtual ecosystem \u2014 gardens, flowers, winds, and seeds, respectively. We further develop a visual analytic system, namely BloomWind, that enables users to quickly identify the cross-platform diffusion patterns and investigate the relevant social media posts. 
Ultimately, we demonstrate the usage of BloomWind through two case studies and validate its effectiveness using expert interviews.","accessible_pdf":false,"authors":[{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"940662579@qq.com","is_corresponding":true,"name":"Jianing Yin"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"hzjia@zju.edu.cn","is_corresponding":false,"name":"Hanze Jia"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"zhoubuwei@zju.edu.cn","is_corresponding":false,"name":"Buwei Zhou"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"tangtan@zju.edu.cn","is_corresponding":false,"name":"Tan Tang"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"yingluu@zju.edu.cn","is_corresponding":false,"name":"Lu Ying"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"sn_ye@zju.edu.cn","is_corresponding":false,"name":"Shuainan Ye"},{"affiliations":["Michigan State University, East Lansing, United States"],"email":"pengtaiq@msu.edu","is_corresponding":false,"name":"Tai-Quan Peng"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"ycwu@zju.edu.cn","is_corresponding":false,"name":"Yingcai Wu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Jianing Yin"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1039","time_end":"","time_stamp":"","time_start":"","title":"Blowing Seeds Across Gardens: Visualizing Implicit Propagation of Cross-Platform Social Media Posts","uid":"v-full-1039","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"When treating Head and Neck cancer patients, oncologists have to navigate a complicated series of treatment decisions for each patient. The relationship between each treatment decision and the potential tradeoff of tumor control and toxicity risk is poorly understood, leaving oncologists to largely rely on institutional knowledge and general guidelines that do not take into account specific patient circumstances. Evaluating these risks relies on a complicated understanding of several different factors such as patient health, spatial tumor spread, and treatment side effect risk that cannot be captured through simple heuristics. To support clinicians in better understanding tradeoffs when deciding on treatment courses, we developed DITTO, a digital-twin and visual computing system that allows clinicians to analyze nuanced risks for each patient and decide on an optimal treatment plan. DITTO relies on a sequential Deep Reinforcement Learning (DRL) system to deliver personalized estimates of both long-term and short-term disease outcomes and toxicity risk for HNC patients. Based on a participatory collaborative design alongside oncologists, we also implement several explainability methods to support clinical trust and encourage healthy skepticism when using our models. We evaluate the efficacy of our model through quantitative evaluation of model performance and case studies with qualitative feedback. 
Finally, we discuss design lessons for developing clinical visual XAI applications for clinical end users.","accessible_pdf":false,"authors":[{"affiliations":["University of Illinois at Chicago, Chicago, United States"],"email":"awentze2@uic.edu","is_corresponding":true,"name":"Andrew Wentzel"},{"affiliations":["University of Houston, Houston, United States"],"email":"skattia@mdanderson.org","is_corresponding":false,"name":"Serageldin Attia"},{"affiliations":["University of Illinois Chicago, Chicago, United States"],"email":"zhangz@uic.edu","is_corresponding":false,"name":"Xinhua Zhang"},{"affiliations":["University of Iowa, Iowa City, United States"],"email":"guadalupe-canahuate@uiowa.edu","is_corresponding":false,"name":"Guadalupe Canahuate"},{"affiliations":["University of Texas, Houston, United States"],"email":"cdfuller@mdanderson.org","is_corresponding":false,"name":"Clifton David Fuller"},{"affiliations":["University of Illinois at Chicago, Chicago, United States"],"email":"g.elisabeta.marai@gmail.com","is_corresponding":false,"name":"G. Elisabeta Marai"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Andrew Wentzel"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1059","time_end":"","time_stamp":"","time_start":"","title":"DITTO: A Visual Digital Twin for Interventions and Temporal Treatment Outcomes in Head and Neck Cancer","uid":"v-full-1059","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"There is increased interest in understanding the interplay between text and visuals in the field of data visualization. However, this attention has predominantly been on the use of text in standalone visualizations (such as text annotation overlays) or augmenting text stories supported by a series of independent views. In this paper, we shift from the traditional focus on single-chart annotations to characterize the nuanced but crucial communication role of text in the complex environment of interactive dashboards. Through a survey and analysis of 190 dashboards in the wild, plus 13 expert interview sessions with experienced dashboard authors, we highlight the distinctive nature of text as an integral component of the dashboard experience, while delving into the categories, semantic levels, and functional roles of text, and exploring how these text elements are coalesced by dashboard authors to guide and inform dashboard users. Our contributions are threefold. First, we distill qualitative and quantitative findings from our studies to characterize current practices of text use in dashboards, including a categorization of text-based components and design patterns. Second, we leverage current practices and existing literature to propose, discuss, and validate recommended practices for text in dashboards, embodied as a set of 12 heuristics that underscore the semantic and functional role of text in offering navigational cues, contextualizing data insights, supporting reading order, among other concerns. Third, we reflect on our findings plus existing literature to identify gaps and propose opportunities for data visualization researchers to push the boundaries on text usage for dashboards, from authoring support and interactivity to text generation and content personalization. 
Our research underscores the significance of elevating text as a first-class citizen in data visualization, and the need to support the inclusion of textual components and their interactive affordances in dashboard design.","accessible_pdf":false,"authors":[{"affiliations":["Tableau Research, Seattle, United States"],"email":"nicole.sultanum@gmail.com","is_corresponding":true,"name":"Nicole Sultanum"},{"affiliations":["Tableau Research, Palo Alto, United States"],"email":"vsetlur@tableau.com","is_corresponding":false,"name":"Vidya Setlur"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Nicole Sultanum"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1060","time_end":"","time_stamp":"","time_start":"","title":"From Instruction to Insight: Exploring the Semantic and Functional Roles of Text in Interactive Dashboards","uid":"v-full-1060","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"While previous work has found success in deploying visualizations as museum exhibits, it has not investigated whether museum context impacts visitor behaviour with these exhibits. We present an interactive Deep-time Literacy Visualization Exhibit (DeLVE) to help museum visitors understand deep time (lengths of extremely long geological processes) by improving proportional reasoning skills through comparison of different time periods. DeLVE uses a new visualization idiom, Connected Multi-Tier Ranges, to visualize curated datasets of past events across multiple scales of time, relating extreme scales with concrete scales that have more familiar magnitudes and units. Museum staff at three separate museums approved the deployment of DeLVE as a digital kiosk, and devoted time to curating a unique dataset in each of them. We collect data from two sources, an observational study and system trace logs. We discuss the importance of context: similar museum exhibits in different contexts were received very differently by visitors. We additionally discuss differences in our process from Sedlmair et al.'s design study methodology which is focused on design studies triggered by connection with collaborators rather than the discovery of a concept to communicate. Supplemental materials are available at: https://osf.io/z53dq/?view_only=4df33aad207144aca149982412125541","accessible_pdf":false,"authors":[{"affiliations":["The University of British Columbia, Vancouver, Canada"],"email":"marasolen@gmail.com","is_corresponding":true,"name":"Mara Solen"},{"affiliations":["University of British Columbia , Vancouver, Canada"],"email":"sultananigar70@gmail.com","is_corresponding":false,"name":"Nigar Sultana"},{"affiliations":["University of British Columbia, Vancouver, Canada"],"email":"laura.lukes@ubc.ca","is_corresponding":false,"name":"Laura A. 
Lukes"},{"affiliations":["University of British Columbia, Vancouver, Canada"],"email":"tmm@cs.ubc.ca","is_corresponding":false,"name":"Tamara Munzner"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Mara Solen"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1063","time_end":"","time_stamp":"","time_start":"","title":"DeLVE into Earth\u2019s Past: A Visualization-Based Exhibit Deployed Across Multiple Museum Contexts","uid":"v-full-1063","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Large Language Models (LLMs), such as ChatGPT and Llama, have revolutionized various domains through their impressive natural language processing capabilities. However, their deployment raises significant ethical and security concerns, including their potential misuse for generating fake news or aiding illegal activities. Thus, ensuring the development of secure and trustworthy LLMs is crucial. Traditional red teaming approaches for identifying vulnerabilities in AI models are limited by their reliance on manual prompt construction and expertise. This paper introduces a novel visual analytics system, AdversaFlow, designed to enhance the security of LLMs against adversarial attacks through human-AI collaboration. Our system, which involves adversarial training between a target model and a red model, is equipped with a unique multi-level adversarial flow visualization and a fluctuation path visualization technique. These features provide a detailed insight into the adversarial dynamics and the robustness of LLMs, thereby enabling AI security experts to identify and mitigate vulnerabilities effectively. We deliver quantitative evaluations for the models and present case studies that validate the utility of our system and share insights for future AI security solutions. 
Our contributions include a human-AI collaboration framework for LLM red teaming, a comprehensive visual analytics system to support adversarial pattern presentation and fluctuation analysis, and valuable lessons learned in visual analytics for AI security.","accessible_pdf":false,"authors":[{"affiliations":["Zhejiang University, Ningbo, China"],"email":"dengdazhen@outlook.com","is_corresponding":true,"name":"Dazhen Deng"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"zhangchuhan024@163.com","is_corresponding":false,"name":"Chuhan Zhang"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"huawzheng@gmail.com","is_corresponding":false,"name":"Huawei Zheng"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"yw.pu@zju.edu.cn","is_corresponding":false,"name":"Yuwen Pu"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"sji@zju.edu.cn","is_corresponding":false,"name":"Shouling Ji"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"ycwu@zju.edu.cn","is_corresponding":false,"name":"Yingcai Wu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Dazhen Deng"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1067","time_end":"","time_stamp":"","time_start":"","title":"AdversaFlow: Visual Red Teaming for Large Language Models with Multi-Level Adversarial Flow","uid":"v-full-1067","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"A growing body of work draws on feminist thinking to challenge assumptions about how people engage with and use visualizations. This work draws on feminist values, driving design and research guidelines that account for the influences of power and neglect. This prior work is largely prescriptive, however, forgoing articulation of how feminist theories of knowledge \u2014 or feminist epistemology \u2014 can alter research design and outcomes. At the core of our work is an engagement with feminist epistemology, drawing attention to how a new framework for how we know what we know enabled us to overcome intellectual tensions in our research. Specifically, we focus on the theoretical concept of entanglement, central to recent feminist scholarship, and contribute: a history of entanglement in the broader scope of feminist theory; an articulation of the main points of entanglement theory for a visualization context; and a case study of research outcomes as evidence of the potential of feminist epistemology to impact visualization research. 
This work answers a call in the community to embrace a broader set of theoretical and epistemic foundations and provides a starting point for bringing different theories into visualization research.","accessible_pdf":false,"authors":[{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"derya.akbaba@liu.se","is_corresponding":true,"name":"Derya Akbaba"},{"affiliations":["Emory University, Atlanta, United States"],"email":"lauren.klein@emory.edu","is_corresponding":false,"name":"Lauren Klein"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"miriah.meyer@liu.se","is_corresponding":false,"name":"Miriah Meyer"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Derya Akbaba"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1077","time_end":"","time_stamp":"","time_start":"","title":"Entanglements for Visualization: Changing Research Outcomes through Feminist Theory","uid":"v-full-1077","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Large Language Models (LLMs) have shown great potential in intelligent visualization systems, especially for domain-specific applications. Integrating LLMs into visualization systems presents challenges, and we categorize these challenges into three alignments: domain problems with LLMs, visualization with LLMs, and interaction with LLMs. To achieve these alignments, we propose a framework and outline a workflow to guide the application of fine-tuned LLMs to enhance visual interactions for domain-specific tasks. These alignment challenges are critical in education as they call for an intelligent visualization system to support beginners' self-regulated learning. Therefore, we apply the framework to education and introduce Tailor-Mind, an interactive visualization system designed to facilitate self-regulated learning for artificial intelligence beginners. Drawing on insights from a preliminary study, we identify self-regulated learning tasks and fine-tuning objectives to guide visualization design and tuning data construction. Our focus on aligning visualization with the fine-tuned LLM makes Tailor-Mind more like a personalized tutor. Tailor-Mind also supports interactive recommendations to help beginners better achieve their learning goals.
Model performance evaluations and user studies confirm that Tailor-Mind improves the self-regulated learning experience, effectively validating the proposed framework.","accessible_pdf":false,"authors":[{"affiliations":["Fudan University, Shanghai, China"],"email":"lgao.lynne@gmail.com","is_corresponding":true,"name":"Lin Gao"},{"affiliations":["Fudan University, Shanghai, China"],"email":"kingluther6666@gmail.com","is_corresponding":false,"name":"Jing Lu"},{"affiliations":["Fudan University, Shanghai, China"],"email":"gemini25szk@gmail.com","is_corresponding":false,"name":"Zekai Shao"},{"affiliations":["Fudan University, Shanghai, China"],"email":"ziyuelin917@gmail.com","is_corresponding":false,"name":"Ziyue Lin"},{"affiliations":["Fudan University, Shanghai, China"],"email":"sbyue23@m.fudan.edu.cn","is_corresponding":false,"name":"Shengbin Yue"},{"affiliations":["Fudan University, Shanghai, China"],"email":"chiokit0819@gmail.com","is_corresponding":false,"name":"Chiokit Ieong"},{"affiliations":["Fudan University, Shanghai, China"],"email":"21307130094@m.fudan.edu.cn","is_corresponding":false,"name":"Yi Sun"},{"affiliations":["University of Vienna, Vienna, Austria"],"email":"rory.james.zauner@univie.ac.at","is_corresponding":false,"name":"Rory Zauner"},{"affiliations":["Fudan University, Shanghai, China"],"email":"zywei@fudan.edu.cn","is_corresponding":false,"name":"Zhongyu Wei"},{"affiliations":["Fudan University, Shanghai, China"],"email":"simingchen3@gmail.com","is_corresponding":false,"name":"Siming Chen"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Lin Gao"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1096","time_end":"","time_stamp":"","time_start":"","title":"Fine-Tuned Large Language Model for Visualization System: A Study on Self-Regulated Learning in Education","uid":"v-full-1096","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Tactics play an important role in team sports by guiding how players interact on the field. Both sports fans and experts want to analyze sports tactics. Existing approaches allow users to visually perceive the multivariate tactical effects. However, these approaches usually consider each tactic as a whole, making it difficult for users to connect the complex interactions inside each tactic to the final tactical effect. In this work, we collaborate with basketball experts and propose a progressive approach to help users gain a deeper understanding of how each tactic works and customize tactics on demand. Users can progressively sketch on a tactic board, and a coach agent will simulate the possible actions in each step and present the simulation to users with facet visualizations. We develop an extensible framework that integrates large language models (LLMs) and visualizations to help users communicate with the coach agent through multimodal inputs. Based on the framework, we design and develop Smartboard, an agent-based interactive visualization system for fine-grained tactical analysis. Smartboard provides users with a structured process of setup, simulation, and evolution, allowing for iterative exploration of tactics based on specific personalized scenarios.
We conduct case studies based on real-world basketball datasets to demonstrate the usefulness of our system.","accessible_pdf":false,"authors":[{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"ziao_liu@outlook.com","is_corresponding":true,"name":"Ziao Liu"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"xxie@zju.edu.cn","is_corresponding":false,"name":"Xiao Xie"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"3170101799@zju.edu.cn","is_corresponding":false,"name":"Moqi He"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"zhao_ws@zju.edu.cn","is_corresponding":false,"name":"Wenshuo Zhao"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"wuyihong0606@gmail.com","is_corresponding":false,"name":"Yihong Wu"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"lycheecheng@zju.edu.cn","is_corresponding":false,"name":"Liqi Cheng"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"zhang_hui@zju.edu.cn","is_corresponding":false,"name":"Hui Zhang"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"ycwu@zju.edu.cn","is_corresponding":false,"name":"Yingcai Wu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Ziao Liu"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1099","time_end":"","time_stamp":"","time_start":"","title":"Smartboard: Visual Exploration of Team Tactics with LLM Agent","uid":"v-full-1099","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"\u201cCorrelation does not imply causation\u201d is a famous mantra in statistical and visual analysis. However, consumers of visualizations often draw causal conclusions when only correlations between variables are shown. In this paper, we investigate factors that contribute to causal relationships users perceive in visualizations. We collected a corpus of concept pairs from variables in widely used datasets and created visualizations that depict varying correlative associations using three typical statistical chart types. We conducted two MTurk studies on (1) preconceived notions on causal relations without charts, and (2) perceived causal relations with charts, for each concept pair. Our results indicate that people make assumptions about causal relationships between pairs of concepts even without seeing any visualized data. Moreover, our results suggest that these assumptions constitute causal priors that, in combination with chart type and visualized association, impact how data visualizations are interpreted. The results also suggest that causal priors may lead to over- or under-estimation in perceived causal relations in different circumstances, and that those priors can also impact users\u2019 confidence in their causal assessments. Using data from the studies, we develop a model to capture the interaction between causal priors and visualized associations as they combine to impact a user\u2019s perceived causal relations. In addition to reporting the study results and analyses, we provide an open dataset of causal priors for 56 specific concept pairs that can serve as a potential benchmark for future studies. 
We also suggest heuristic-based guidelines to help designers improve visualization design choices to better support visual causal inference.","accessible_pdf":false,"authors":[{"affiliations":["University of North Carolina-Chapel Hill, Chapel Hill, United States"],"email":"zeyuwang@cs.unc.edu","is_corresponding":true,"name":"Arran Zeyu Wang"},{"affiliations":["UNC-Chapel Hill, Chapel Hill, United States"],"email":"borland@renci.org","is_corresponding":false,"name":"David Borland"},{"affiliations":["Davidson College, Davidson, United States"],"email":"tapeck@davidson.edu","is_corresponding":false,"name":"Tabitha C. Peck"},{"affiliations":["University of North Carolina, Chapel Hill, United States"],"email":"vaapad@live.unc.edu","is_corresponding":false,"name":"Wenyuan Wang"},{"affiliations":["University of North Carolina, Chapel Hill, United States"],"email":"gotz@unc.edu","is_corresponding":false,"name":"David Gotz"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Arran Zeyu Wang"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1100","time_end":"","time_stamp":"","time_start":"","title":"Causal Priors and Their Influence on Judgements of Causality in Visualized Data","uid":"v-full-1100","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Acute stroke demands prompt diagnosis and treatment to achieve optimal patient outcomes. However, the intricate and irregular nature of clinical data associated with acute stroke, particularly blood pressure (BP) measurements, presents substantial obstacles to effective visual analytics and decision-making. Through a year-long collaboration with experienced neurologists, we developed PhenoFlow, a visual analytics system that leverages collaboration between humans and Large Language Models (LLMs) to analyze the extensive and complex data of acute ischemic stroke patients. PhenoFlow pioneers an innovative workflow, where the LLM serves as a data wrangler while neurologists explore and supervise the output using visualizations and natural language interactions. This approach enables neurologists to focus more on decision-making with reduced cognitive load. To protect sensitive patient information, PhenoFlow only utilizes metadata to make inferences and synthesize executable code, without accessing raw patient data. This ensures that the results are both reproducible and interpretable while maintaining patient privacy. The system incorporates a slice-and-wrap design that employs temporal folding to create an overlaid circular visualization. Combined with a linear bar graph, this design aids in exploring meaningful patterns within irregularly measured BP data. Through case studies, PhenoFlow has demonstrated its capability to support iterative analysis of extensive clinical datasets, reducing cognitive load and enabling neurologists to make well-informed decisions.
Grounded in long-term collaboration with domain experts, our research demonstrates the potential of utilizing LLMs to tackle current challenges in data-driven clinical decision-making for acute ischemic stroke patients.","accessible_pdf":false,"authors":[{"affiliations":["Seoul National University, Seoul, Korea, Republic of"],"email":"jykim@hcil.snu.ac.kr","is_corresponding":true,"name":"Jaeyoung Kim"},{"affiliations":["Seoul National University, Seoul, Korea, Republic of"],"email":"sihyeon@hcil.snu.ac.kr","is_corresponding":false,"name":"Sihyeon Lee"},{"affiliations":["Seoul National University, Seoul, Korea, Republic of"],"email":"hj@hcil.snu.ac.kr","is_corresponding":false,"name":"Hyeon Jeon"},{"affiliations":["Korea University Guro Hospital, Seoul, Korea, Republic of"],"email":"gooday19@gmail.com","is_corresponding":false,"name":"Keon-Joo Lee"},{"affiliations":["Hankuk University of Foreign Studies, Yongin-si, Korea, Republic of"],"email":"bkim@hufs.ac.kr","is_corresponding":false,"name":"Bohyoung Kim"},{"affiliations":["Seoul National University Bundang Hospital, Seongnam, Korea, Republic of"],"email":"braindoc@snu.ac.kr","is_corresponding":false,"name":"HEE JOON"},{"affiliations":["Seoul National University, Seoul, Korea, Republic of"],"email":"jseo@snu.ac.kr","is_corresponding":false,"name":"Jinwook Seo"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Jaeyoung Kim"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1121","time_end":"","time_stamp":"","time_start":"","title":"PhenoFlow: A Human-LLM Driven Visual Analytics System for Exploring Large and Complex Stroke Datasets","uid":"v-full-1121","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Citations allow quickly identifying related research. If multiple publications are selected as seeds, specific suggestions for related literature can be made based on the number of incoming and outgoing citation links to this selection. Interactively adding recommended publications to the selection refines the next suggestion and incrementally builds a relevant collection of publications. Following this approach, the paper presents a search and foraging approach, PUREsuggest, which combines citation-based suggestions with augmented visualizations of the citation network. The focus and novelty of the approach are, first, the transparency of how the rankings are explained visually and, second, that the process can be steered through user-defined keywords, which reflect topics of interest. The system can be used to build new literature collections, to update and assess existing ones, as well as to use the collected literature for identifying relevant experts in the field.
We evaluated the recommendation approach through simulated sessions and performed a user study investigating search strategies and usage patterns supported by the interface.","accessible_pdf":false,"authors":[{"affiliations":["University of Bamberg, Bamberg, Germany"],"email":"fabian.beck@uni-bamberg.de","is_corresponding":true,"name":"Fabian Beck"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Fabian Beck"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1128","time_end":"","time_stamp":"","time_start":"","title":"PUREsuggest: Citation-based Literature Search and Visual Exploration with Keyword-controlled Rankings","uid":"v-full-1128","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Inspired by recent advances in digital fabrication, artists and scientists have demonstrated that physical data encodings (i.e., data physicalizations) can increase engagement with data, foster collaboration, and in some cases, improve data legibility and analysis relative to digital alternatives. However, prior empirical studies have only investigated abstract data encoded in physical form (e.g., laser cut bar charts) and not continuously sampled spatial data fields relevant to climate and medical science (e.g., heights, temperatures, densities, and velocities sampled on a spatial grid). This paper presents the design and results of the first study to characterize human performance in 3D spatial data analysis tasks across analogous physical and digital visualizations. Participants analyzed continuous spatial elevation data with three visualization modalities: (1) 2D digital visualization; (2) perspective-tracked, stereoscopic \"fishtank\" virtual reality; and (3) 3D printed data physicalization. Their tasks included tracing paths downhill, looking up spatial locations and comparing their relative heights, and identifying and reporting the minimum and maximum heights within certain spatial regions. As hypothesized, in most cases, participants performed the tasks just as well or better in the physical modality (based on time and error metrics). Additional results include an analysis of open-ended feedback from participants and discussion of implications for further research on the value of data physicalization. All data and supplemental materials are available at https://osf.io/7xdq4/?view_only=7416f8cfca85473889456fb69527abbc","accessible_pdf":false,"authors":[{"affiliations":["University of Minnesota, Minneapolis, United States"],"email":"bridger.g.herman@gmail.com","is_corresponding":true,"name":"Bridger Herman"},{"affiliations":["Beth Israel Deaconess Medical Center, Boston, United States"],"email":"cdjackso@bidmc.harvard.edu","is_corresponding":false,"name":"Cullen D. Jackson"},{"affiliations":["University of Minnesota, Minneapolis, United States"],"email":"dfk@umn.edu","is_corresponding":false,"name":"Daniel F. 
Keefe"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Bridger Herman"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1137","time_end":"","time_stamp":"","time_start":"","title":"Touching the Ground: Evaluating the Effectiveness of Data Physicalizations for Spatial Data Analysis Tasks","uid":"v-full-1137","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Written language is a useful mode for non-visual creative activities like writing essays and planning searches. This paper investigates the integration of written language into the visualization design process. We call this idea a `written rudder,' , since it acts as a guiding force or strategy for the design. Via an interview study of 24 working visualization designers, we first established that only a minority of participants systematically use written rudders to aid in design. A second study with 15 visualization designers examined four different variants of rudders: asking questions, stating conclusions, composing a narrative, and writing titles. Overall, participants had a positive reaction; designers recognized the benefits of explicitly writing down components of the design and indicated that they would use this approach in future design work. More specifically, two approaches \u2013- writing questions and writing conclusions/takeaways \u2013- were seen as beneficial across the design process, while writing narratives showed promise mainly for the creation stage. Although concerns around potential bias during data exploration were raised, participants also discussed strategies to mitigate such concerns. This paper contributes to a deeper understanding of the interplay between language and visualization, and proposes a straightforward, lightweight addition to the visualization design process.","accessible_pdf":false,"authors":[{"affiliations":["UC Berkeley, Berkeley, United States"],"email":"chase_stokes@berkeley.edu","is_corresponding":true,"name":"Chase Stokes"},{"affiliations":["Self, Berkeley, United States"],"email":"clarahu@berkeley.edu","is_corresponding":false,"name":"Clara Hu"},{"affiliations":["UC Berkeley, Berkeley, United States"],"email":"hearst@berkeley.edu","is_corresponding":false,"name":"Marti Hearst"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Chase Stokes"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1140","time_end":"","time_stamp":"","time_start":"","title":"It's a Good Idea to Put It Into Words: Writing 'Rudders' in the Initial Stages of Visualization Design","uid":"v-full-1140","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"To deploy machine learning (ML) models on-device, practitioners use compression algorithms to shrink and speed up models while maintaining their high-quality output. 
A critical aspect of compression in practice is model comparison, including tracking many compression experiments, identifying subtle changes in model behavior, and negotiating complex accuracy-efficiency trade-offs. However, existing compression tools poorly support comparison, leading to tedious and, sometimes, incomplete analyses spread across disjoint tools. To support real-world comparative workflows, we develop an interactive visual system called Compress & Compare. Within a single interface, Compress & Compare surfaces promising compression strategies by visualizing provenance relationships between compressed models and reveals compression-induced behavior changes by comparing models' predictions, weights, and activations. We demonstrate how Compress & Compare supports common compression analysis tasks through two case studies\u2014debugging failed compression on generative language models and identifying compression-induced biases in image classification. We further evaluate Compress & Compare in a user study with eight compression experts, illustrating its potential to provide structure to compression workflows, help practitioners build intuition about compression, and encourage thorough analysis of compression\u2019s effect on model behavior. Through these evaluations, we identify compression-specific challenges that future visual analytics tools should consider and Compress & Compare visualizations that may generalize to broader model comparison tasks.","accessible_pdf":false,"authors":[{"affiliations":["Massachusetts Institute of Technology, Cambridge, United States"],"email":"aboggust@mit.edu","is_corresponding":true,"name":"Angie Boggust"},{"affiliations":["Carnegie Mellon University, Pittsburgh, United States"],"email":"vsivaram@andrew.cmu.edu","is_corresponding":false,"name":"Venkatesh Sivaraman"},{"affiliations":["Apple, Cambridge, United States"],"email":"yassogba@gmail.com","is_corresponding":false,"name":"Yannick Assogba"},{"affiliations":["Apple, Seattle, United States"],"email":"donghao@apple.com","is_corresponding":false,"name":"Donghao Ren"},{"affiliations":["Apple, Pittsburgh, United States"],"email":"domoritz@cmu.edu","is_corresponding":false,"name":"Dominik Moritz"},{"affiliations":["Apple, Seattle, United States"],"email":"fred.hohman@gmail.com","is_corresponding":false,"name":"Fred Hohman"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Angie Boggust"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1142","time_end":"","time_stamp":"","time_start":"","title":"Compress and Compare: Interactively Evaluating Efficiency and Behavior Across ML Model Compression Experiments","uid":"v-full-1142","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Large Language Models (LLMs) like GPT-4 which support multimodal input (i.e., prompts containing images in addition to text) have immense potential to advance visualization research. However, many questions exist about the visual capabilities of such models, including how well they can read and interpret visually represented data. In our work, we address this question by evaluating the GPT-4 multimodal LLM using a suite of task sets meant to assess the model\u2019s visualization literacy. 
The task sets are based on existing work in the visualization community addressing both automated chart question answering and human visualization literacy across multiple settings. Our assessment finds that GPT-4 can perform tasks such as recognizing trends and extreme values, and also demonstrates some understanding of visualization design best-practices. By contrast, GPT-4 struggles with simple value retrieval when not provided with the original dataset, lacks the ability to reliably distinguish between colors in charts, and occasionally suffers from hallucination and inconsistency. We conclude by reflecting on the model\u2019s strengths and weaknesses as well as the potential utility of models like GPT-4 for future visualization research. We also release all code, stimuli, and results for the task sets at the following link: (REDACTED FOR REVIEW)","accessible_pdf":false,"authors":[{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"abendeck3@gatech.edu","is_corresponding":true,"name":"Alexander Bendeck"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"john.stasko@cc.gatech.edu","is_corresponding":false,"name":"John Stasko"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Alexander Bendeck"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1147","time_end":"","time_stamp":"","time_start":"","title":"An Empirical Evaluation of the GPT-4 Multimodal Language Model on Visualization Literacy Tasks","uid":"v-full-1147","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Composite visualization represents a widely embraced design that combines multiple visual representations to create an integrated view. However, the traditional approach of creating composite visualizations in immersive environments typically occurs asynchronously outside of the immersive space and is carried out by experienced experts. In this work, we take the first step to empower users to participate in the creation of composite visualization within immersive environments through embodied interactions. This could provide a flexible and fluid experience for data exploration and facilitate a deep understanding of the relationship between data visualizations. We begin with forming a design space of embodied interactions to create various types of composite visualizations with the consideration of data relationships. Drawing inspiration from people's natural experience of manipulating physical objects, we design interactions to directly assemble composite visualizations in immersive environments. Building upon the design space, we present a series of case studies showcasing the interactive method to create different kinds of composite visualizations in Virtual Reality (VR). Subsequently, we conduct a user study to evaluate the usability of the derived interaction techniques and user experience of embodiedly creating composite visualizations. 
We find that empowering users to participate in composite visualizations through embodied interactions enables them to flexibly leverage different visualization representations for understanding and communicating the relationships between different views, which underscores the potential for a set of application scenarios in the future.","accessible_pdf":false,"authors":[{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China","The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"qzhual@connect.ust.hk","is_corresponding":true,"name":"Qian Zhu"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States","Georgia Institute of Technology, Atlanta, United States"],"email":"luttul@umich.edu","is_corresponding":false,"name":"Tao Lu"},{"affiliations":["Adobe Research, San Jose, United States","Adobe Research, San Jose, United States"],"email":"sguo@adobe.com","is_corresponding":false,"name":"Shunan Guo"},{"affiliations":["Hong Kong University of Science and Technology, Hong Kong, Hong Kong","Hong Kong University of Science and Technology, Hong Kong, Hong Kong"],"email":"mxj@cse.ust.hk","is_corresponding":false,"name":"Xiaojuan Ma"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States","Georgia Institute of Technology, Atlanta, United States"],"email":"yalongyang@hotmail.com","is_corresponding":false,"name":"Yalong Yang"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Qian Zhu"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1150","time_end":"","time_stamp":"","time_start":"","title":"CompositingVis: Exploring Interaction for Creating Composite Visualizations in Immersive Environments","uid":"v-full-1150","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Points of interest on a map such as restaurants, hotels, or subway stations, give rise to categorical point data: data that have a fixed location and one or more categorical attributes. Consequently, recent years have seen various set visualization approaches that visually connect points of the same category to support users in understanding the spatial distribution of categories. Existing methods use complex and often highly irregular shapes to connect points of the same category, leading to high cognitive load for the user. In this paper we introduce SimpleSets that use simple shapes to enclose categorical point patterns and provide a low-complexity overview of the data distribution. We give formal definitions of point patterns that correspond to simple shapes and describe an algorithm that partitions categorical points into few such patterns. Our second contribution is a rendering algorithm that transforms a given partition into a clean set of shapes resulting in an aesthetically pleasing set visualization. Our algorithm pays particular attention to resolving intersections between nearby shapes in a consistent manner. We compare SimpleSets to the state-of-the-art set visualizations using standard datasets from the literature. 
SimpleSets are designed to visualize disjoint categories; however, we discuss avenues to extend our technique to overlapping set systems.","accessible_pdf":false,"authors":[{"affiliations":["TU Eindhoven, Eindhoven, Netherlands"],"email":"s.w.v.d.broek@tue.nl","is_corresponding":true,"name":"Steven van den Broek"},{"affiliations":["TU Eindhoven, Eindhoven, Netherlands"],"email":"w.meulemans@tue.nl","is_corresponding":false,"name":"Wouter Meulemans"},{"affiliations":["TU Eindhoven, Eindhoven, Netherlands"],"email":"b.speckmann@tue.nl","is_corresponding":false,"name":"Bettina Speckmann"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Steven van den Broek"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1153","time_end":"","time_stamp":"","time_start":"","title":"SimpleSets: Capturing Categorical Point Patterns with Simple Shapes","uid":"v-full-1153","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Interactive visualizations are powerful tools for Exploratory Data Analysis (EDA), but how do they affect the observations analysts make about their data? We conducted a qualitative experiment with 13 professional data scientists analyzing two datasets within Jupyter notebooks, collecting a rich dataset of interaction traces and think-aloud utterances. By qualitatively analyzing participant verbalizations, we introduce the concept of \"observation-analysis states.\" These states capture both the dataset characteristics a participant focuses on and the insights they express. Our definition reveals that interactive visualizations on average lead to earlier and more complex insights about relationships between dataset attributes compared to static visualizations. Moreover, this process identified new measures for studying representation use in notebooks, such as hover time, revisiting rate, and representational diversity. In particular, revisiting rates revealed behavior where analysts revisit particular representations throughout the time course of an analysis, using them more as navigational aids through an EDA than as strict hypothesis-answering tools. We show how these measures helped identify other patterns of analysis behavior, such as the \"80-20 rule\", where a small subset of representations drove the majority of observations.
Based on these findings, we offer design guidelines for interactive exploratory analysis tooling and reflect on future directions for studying the role that visualizations play in EDA.","accessible_pdf":false,"authors":[{"affiliations":["MIT, Cambridge, United States"],"email":"dwootton@mit.edu","is_corresponding":true,"name":"Dylan Wootton"},{"affiliations":["MIT, Cambridge, United States"],"email":"amyraefoxphd@gmail.com","is_corresponding":false,"name":"Amy Rae Fox"},{"affiliations":["University of Colorado Boulder, Boulder, United States"],"email":"evan.peck@colorado.edu","is_corresponding":false,"name":"Evan Peck"},{"affiliations":["MIT, Cambridge, United States"],"email":"arvindsatya@mit.edu","is_corresponding":false,"name":"Arvind Satyanarayan"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Dylan Wootton"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1155","time_end":"","time_stamp":"","time_start":"","title":"Charting EDA: How Visualizations and Interactions Shape Analysis in Computational Notebooks.","uid":"v-full-1155","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Multi-objective evolutionary algorithms (MOEAs) have emerged as powerful tools for solving complex optimization problems characterized by multiple, often conflicting, objectives. While advancements have been made in computational efficiency as well as diversity and convergence of solutions, a critical challenge persists: the internal evolutionary mechanisms are opaque to human users. Drawing upon the successes of explainable AI in explaining complex algorithms and models, we argue that the need to understand the underlying evolutionary operators and population dynamics in MOEAs aligns well with a visual analytics paradigm. This paper introduces ParetoTracker, a visual analytics framework designed to support the comprehension and inspection of population dynamics in the evolutionary processes of MOEAs. Informed by preliminary literature review and expert interviews, the framework establishes a multi-level analysis scheme, which caters to user engagement and exploration ranging from examining overall trends in performance metrics to conducting fine-grained inspections of evolutionary operations. In contrast to conventional practices that require manual plotting of solutions for each generation, ParetoTracker facilitates the examination of temporal trends and dynamics across consecutive generations in an integrated visual interface. 
The effectiveness of the framework is demonstrated through case studies and expert interviews focused on widely adopted benchmark optimization problems.","accessible_pdf":false,"authors":[{"affiliations":["Southern University of Science and Technology, Shenzhen, China"],"email":"zhangzr32021@mail.sustech.edu.cn","is_corresponding":false,"name":"Zherui Zhang"},{"affiliations":["Southern University of Science and Technology, Shenzhen, China"],"email":"yangf2020@mail.sustech.edu.cn","is_corresponding":false,"name":"Fan Yang"},{"affiliations":["Southern University of Science and Technology, Shenzhen, China"],"email":"ranchengcn@gmail.com","is_corresponding":false,"name":"Ran Cheng"},{"affiliations":["Southern University of Science and Technology, Shenzhen, China"],"email":"mayx@sustech.edu.cn","is_corresponding":true,"name":"Yuxin Ma"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yuxin Ma"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1179","time_end":"","time_stamp":"","time_start":"","title":"ParetoTracker: Understanding Population Dynamics in Multi-objective Evolutionary Algorithms through Visual Analytics","uid":"v-full-1179","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"This paper presents an interactive technique to explain visual patterns in network visualizations to analysts who are unfamiliar with these visualizations and who are learning to read them. Learning a visualization requires mastering its visual grammar and decoding information presented through visual marks, graphical encodings, and spatial configurations. To help people learn unfamiliar network visualization designs and extract meaningful information, we introduce the concept of interactive pattern explanation that allows viewers to select an arbitrary area in a visualization, then mines the underlying data patterns, and eventually explains both visual and data patterns present in the viewer\u2019s selection. In a qualitative and a quantitative user study with a total of 32 participants, we compare interactive pattern explanations to only textual and only visual (cheatsheets) explanations. 
Our results show that interactive explanations increase learning of i) unfamiliar visualizations, ii) patterns in network science, and iii) the respective network terminology.","accessible_pdf":false,"authors":[{"affiliations":["Newcastle University, Newcastle Upon Tyne, United Kingdom"],"email":"xinhuan.shu@gmail.com","is_corresponding":true,"name":"Xinhuan Shu"},{"affiliations":["University of Edinburgh, Edinburgh, United Kingdom"],"email":"alexis.pister@hotmail.com","is_corresponding":false,"name":"Alexis Pister"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"tangjunxiu@zju.edu.cn","is_corresponding":false,"name":"Junxiu Tang"},{"affiliations":["University of Toronto, Toronto, Canada"],"email":"fanny@dgp.toronto.edu","is_corresponding":false,"name":"Fanny Chevalier"},{"affiliations":["Inria, Bordeaux, France","University of Edinburgh, Edinburgh, United Kingdom"],"email":"bbach@inf.ed.ac.uk","is_corresponding":false,"name":"Benjamin Bach"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Xinhuan Shu"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1185","time_end":"","time_stamp":"","time_start":"","title":"Does This Have a Particular Meaning?: Interactive Pattern Explanation for Network Visualizations","uid":"v-full-1185","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Emerging multimodal large language models (MLLMs) exhibit great potential for chart question answering (CQA). Recent efforts primarily focus on scaling up training datasets (i.e., charts, data tables, and question-answer (QA) pairs) through data collection and synthesis. However, our empirical study on existing MLLMs and CQA datasets reveals notable gaps. First, current data collection and synthesis focus on data volume and lack consideration of fine-grained visual encodings and QA tasks, resulting in an unbalanced data distribution divergent from practical CQA scenarios. Second, existing work follows the training recipe of the base MLLMs initially designed for natural images, under-exploring the adaptation to unique chart characteristics, such as rich text elements. To fill the gap, we propose a visualization-referenced instruction tuning approach to guide the training dataset enhancement and model development. Specifically, we propose a novel data engine to effectively filter diverse and high-quality data from existing datasets and subsequently refine and augment the data using LLM-based generation techniques to better align with practical QA tasks and visual encodings. Then, to facilitate the adaptation to chart characteristics, we utilize the enriched data to train an MLLM by unfreezing the vision encoder and incorporating a mixture-of-resolution adaptation strategy for enhanced fine-grained recognition. Experimental results validate the effectiveness of our approach. Even with fewer training examples, our model consistently outperforms state-of-the-art CQA models on established benchmarks.
We also contribute a dataset split as a benchmark for future research.","accessible_pdf":false,"authors":[{"affiliations":["The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China"],"email":"xingchen.zeng@outlook.com","is_corresponding":true,"name":"Xingchen Zeng"},{"affiliations":["The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China"],"email":"hlin386@connect.hkust-gz.edu.cn","is_corresponding":false,"name":"Haichuan Lin"},{"affiliations":["The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China"],"email":"yyebd@connect.ust.hk","is_corresponding":false,"name":"Yilin Ye"},{"affiliations":["The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China","The Hong Kong University of Science and Technology, Hong Kong SAR, China"],"email":"weizeng@hkust-gz.edu.cn","is_corresponding":false,"name":"Wei Zeng"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Xingchen Zeng"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1193","time_end":"","time_stamp":"","time_start":"","title":"Advancing Multimodal Large Language Models in Chart Question Answering with Visualization-Referenced Instruction Tuning","uid":"v-full-1193","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"The Dunning-Kruger Effect (DKE) is a metacognitive phenomenon where low-skilled individuals tend to overestimate their competence while high-skilled individuals tend to underestimate their competence. This effect has been observed in a number of domains including humor, grammar, and logic. In this paper, we explore if and how DKE manifests in visual reasoning and visual data analysis tasks. Across two online user studies involving (1) a sliding puzzle game and (2) a scatterplot-based categorization task, we demonstrate that individuals are susceptible to DKE in visual tasks: those who performed best underestimated their performance, while bottom performers overestimated their performance. In addition, we contribute novel analyses that correlate susceptibility to DKE with several variables including personality traits and user interactions.
Our findings pave the way for novel modes of bias detection via interaction patterns and establish promising directions towards interventions tailored to an individual's personality traits.","accessible_pdf":false,"authors":[{"affiliations":["Emory University, Atlanta, United States"],"email":"mengyu.chen@emory.edu","is_corresponding":true,"name":"Mengyu Chen"},{"affiliations":["Emory University, Atlanta, United States"],"email":"yijun.liu2@emory.edu","is_corresponding":false,"name":"Yijun Liu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Mengyu Chen"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1202","time_end":"","time_stamp":"","time_start":"","title":"Unmasking Dunning-Kruger Effect in Visual Reasoning and Visual Data Analysis","uid":"v-full-1202","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"We present ProvenanceWidgets, a JavaScript library of UI control elements such as radio buttons, checkboxes, and dropdowns to track and dynamically overlay a user's analytic provenance. These in situ overlays not only save screen space but also minimize the amount of time and effort needed to access the same information from elsewhere in the UI. In this paper, we discuss how we design modular UI control elements to track how often and how recently a user interacts with them and design visual overlays showing an aggregated summary as well as a detailed temporal history. We demonstrate the capability of ProvenanceWidgets by recreating three prior widget libraries: (1) Scented Widgets, (2) Phosphor objects, and (3) Dynamic Query Widgets. We also evaluated its expressiveness and conducted case studies with visualization developers to assess its effectiveness. We find that ProvenanceWidgets enables developers to implement custom provenance-tracking applications effectively.
ProvenanceWidgets is available as open-source software at https://github.com/ProvenanceWidgets to help application developers build custom provenance-based systems.","accessible_pdf":false,"authors":[{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"arpitnarechania@gatech.edu","is_corresponding":true,"name":"Arpit Narechania"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"kaustubhodak1@gmail.com","is_corresponding":false,"name":"Kaustubh Odak"},{"affiliations":["ETH Z\u00fcrich, Z\u00fcrich, Switzerland"],"email":"melassady@ai.ethz.ch","is_corresponding":false,"name":"Mennatallah El-Assady"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"endert@gatech.edu","is_corresponding":false,"name":"Alex Endert"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Arpit Narechania"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1204","time_end":"","time_stamp":"","time_start":"","title":"ProvenanceWidgets: A Library of UI Control Elements to Track and Dynamically Overlay Analytic Provenance","uid":"v-full-1204","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Graphs are often used to model relationships between entities. The identification and visualization of clusters in graphs enable insight discovery in many application areas, such as life sciences and social sciences. Force-directed graph layout algorithms promote the visual saliency of clusters, as they generally bring adjacent nodes closer together, and push non-adjacent nodes apart. In this work, we study the impact of node ordering on the visual saliency of clusters in orderable node-link diagrams, namely radial diagrams, arc diagrams and symmetric arc diagrams. Through a crowdsourced controlled experiment, we show that users can count clusters consistently more accurately, and to a large extent faster, with orderable node-link diagrams than with three state-of-the art force-directed layout algorithms, i.e., `Linlog', `Backbone' and, `sfdp'. The measured advantage is greater in case of low cluster separability and/or low compactness. 
A free copy of this paper and all supplemental materials are available at https://osf.io/kc3dg/?view_only=892f7b96752e40a6baefb2e50e866f9d","accessible_pdf":false,"authors":[{"affiliations":["Luxembourg Institute of Science and Technology, Esch-sur-Alzette, Luxembourg"],"email":"nora.alnaami@list.lu","is_corresponding":false,"name":"Nora Al-Naami"},{"affiliations":["Luxembourg Institute of Science and Technology, Belvaux, Luxembourg"],"email":"nicolas.medoc@list.lu","is_corresponding":false,"name":"Nicolas Medoc"},{"affiliations":["Uppsala University, Uppsala, Sweden"],"email":"matteo.magnani@it.uu.se","is_corresponding":false,"name":"Matteo Magnani"},{"affiliations":["Luxembourg Institute of Science and Technology, Belvaux, Luxembourg"],"email":"mohammad.ghoniem@list.lu","is_corresponding":true,"name":"Mohammad Ghoniem"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Mohammad Ghoniem"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1214","time_end":"","time_stamp":"","time_start":"","title":"Improved Visual Saliency of Graph Clusters with Orderable Node-Link Layouts","uid":"v-full-1214","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Placing text labels is a common way to explain key elements in a given scene. Given a graphic input and original label information, how to place labels to meet both geometric and aesthetic requirements is an open and challenging problem. Geometry-wise, traditional rule-driven solutions struggle to capture the complex interactions between labels, let alone consider graphical/appearance content. In terms of aesthetics, creating training/evaluation data requires nontrivial effort and expertise in design, thus resulting in a lack of decent datasets for learning-based methods. To address the above challenges, we formulate the task with a graph representation, where nodes correspond to labels and edges to the between-label interactions, and treat label placement as a node position prediction problem. With this novel representation, we design a Label Placement Graph Transformer (LPGT) to predict label positions. Specifically, edge-level attention, conditioned on node representations, is introduced to reveal potential relationships between labels. To integrate graphic/image information, we design a feature aligning strategy that extracts deep features for nodes and edges efficiently. Next, to address the dataset issue, we collect commercial illustrations with professionally designed label layouts from household appliance manuals, and annotate them with useful information to create a novel dataset named the Appliance Manual Illustration Labels (AMIL) dataset. 
In a thorough evaluation on AMIL, our LPGT solution achieves promising label placement performance compared with popular baselines.","accessible_pdf":false,"authors":[{"affiliations":["Southwest University, Beibei, China"],"email":"qujingwei@swu.edu.cn","is_corresponding":true,"name":"Jingwei Qu"},{"affiliations":["Southwest University, Chongqing, China"],"email":"z2211973606@email.swu.edu.cn","is_corresponding":false,"name":"Pingshun Zhang"},{"affiliations":["Southwest University, Beibei, China"],"email":"enyuche@gmail.com","is_corresponding":false,"name":"Enyu Che"},{"affiliations":["College of Computer and Information Science, School of Software, Southwest University, Chongqing, China"],"email":"out1147205215@outlook.com","is_corresponding":false,"name":"Yinan Chen"},{"affiliations":["Stony Brook University, New York, United States"],"email":"hling@cs.stonybrook.edu","is_corresponding":false,"name":"Haibin Ling"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Jingwei Qu"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1218","time_end":"","time_stamp":"","time_start":"","title":"Graph Transformer for Label Placement","uid":"v-full-1218","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"How do cancer cells grow, divide, proliferate and die? How do drugs influence these processes? These are difficult questions that we can attempt to answer with a combination of time-series microscopy experiments, classification algorithms, and data visualization. However, collecting this type of data and applying algorithms to segment and track cells and construct lineages of proliferation is error-prone, and identifying the errors can be challenging since it often requires cross-checking multiple data types. Similarly, analyzing and communicating the results necessitates synthesizing different data types into a single narrative. State-of-the-art visualization methods for such data use independent line charts, tree diagrams, and images in separate views. However, this spatial separation requires the viewer of these charts to combine the relevant pieces of data in memory. To simplify this challenging task, we describe design principles for weaving cell images, time-series data, and tree data into a cohesive visualization. Our design principles are based on choosing a primary data type that drives the layout and integrates the other data types into that layout. We then introduce Aardvark, a system that uses these principles to implement novel visualization techniques. 
Based on Aardvark, we demonstrate the utility of each of these approaches for discovery, communication, and data debugging in a series of case studies.","accessible_pdf":false,"authors":[{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"devin@sci.utah.edu","is_corresponding":true,"name":"Devin Lange"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"robert.judson-torres@hci.utah.edu","is_corresponding":false,"name":"Robert L Judson-Torres"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"tzangle@chemeng.utah.edu","is_corresponding":false,"name":"Thomas A Zangle"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"alex@sci.utah.edu","is_corresponding":false,"name":"Alexander Lex"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Devin Lange"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1232","time_end":"","time_stamp":"","time_start":"","title":"Aardvark: Composite Visualizations of Trees, Time-Series, and Images","uid":"v-full-1232","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Exploratory data science is an iterative process of obtaining, cleaning, profiling, analyzing, and interpreting data. This cyclical way of working creates challenges within the linear structure of computational notebooks that lead to issues with code quality, recall, and reproducibility. To remedy this, we present Loops, a set of visual support techniques for iterative and exploratory data analysis in computational notebooks. Loops leverages provenance information to visualize the impact of changes made within a notebook. In visualizations of the notebook history, we trace the evolution of the notebook over time and highlight differences between versions. Loops visualizes the provenance of code, markdown, tables, visualizations, and images and their respective differences. Analysts can explore these differences in detail in a separate view. Loops not only improves the reproducibility of notebooks, but also supports analysts in their data science work by showing the effects of changes and facilitating comparison of multiple versions. 
We demonstrate the utility and potential impact of our approach through two use cases and feedback from notebook users from a range of backgrounds.","accessible_pdf":false,"authors":[{"affiliations":["Johannes Kepler University Linz, Linz, Austria"],"email":"klaus@eckelt.info","is_corresponding":true,"name":"Klaus Eckelt"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"kirangadhave2@gmail.com","is_corresponding":false,"name":"Kiran Gadhave"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"alex@sci.utah.edu","is_corresponding":false,"name":"Alexander Lex"},{"affiliations":["Johannes Kepler University Linz, Linz, Austria"],"email":"marc.streit@jku.at","is_corresponding":false,"name":"Marc Streit"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Klaus Eckelt"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1251","time_end":"","time_stamp":"","time_start":"","title":"Loops: Leveraging Provenance and Visualization to Support Exploratory Data Analysis in Notebooks","uid":"v-full-1251","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"People commonly utilize visualizations not only to examine a given dataset, but also to draw generalizable conclusions about the underlying models or phenomena. Previous research has compared human visual inference to that of an optimal Bayesian agent, with deviations from rational analysis viewed as problematic. However, human reliance on non-normative heuristics may prove advantageous in certain circumstances. We investigate scenarios where human intuition might surpass idealized statistical rationality. In two experiments, we examine individuals' accuracy in characterizing the parameters of known data-generating models from bivariate visualizations. Our findings indicate that, although participants generally exhibited lower accuracy compared to statistical models, they frequently outperformed Bayesian agents, particularly when faced with extreme samples. Participants appeared to rely on their internal models to filter out noisy visualizations, thus improving their resilience against spurious data. However, participants displayed overconfidence and struggled with uncertainty estimation. They also exhibited higher variance than statistical machines. Our findings suggest that analyst gut reactions to visualizations may provide an advantage, even when departing from rationality. These results carry implications for designing visual analytics tools, offering new perspectives on how to integrate statistical models and analyst intuition for improved inference and decision-making.","accessible_pdf":false,"authors":[{"affiliations":["Indiana University, Indianapolis, United States"],"email":"rkoonch@iu.edu","is_corresponding":true,"name":"Ratanond Koonchanok"},{"affiliations":["Argonne National Laboratory, Lemont, United States","University of Illinois Chicago, Chicago, United States"],"email":"papka@anl.gov","is_corresponding":false,"name":"Michael E. 
Papka"},{"affiliations":["Indiana University, Indianapolis, United States"],"email":"redak@iu.edu","is_corresponding":false,"name":"Khairi Reda"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Ratanond Koonchanok"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1256","time_end":"","time_stamp":"","time_start":"","title":"Trust Your Gut: Comparing Human and Machine Inference from Noisy Visualizations","uid":"v-full-1256","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Providing effective guidance for users has long been an important and challenging task for efficient exploratory visual analytics, especially when selecting variables for visualization in high-dimensional datasets. Correlation is the most widely applied metric for guidance in statistical and analytical tools, however a reliance on correlation may lead users towards false positives when interpreting causal relations in the data. In this work, inspired by prior insights on the benefits of counterfactual visualization in supporting visual causal inference, we propose a novel, simple, and efficient counterfactual guidance method to enhance causal inference performance in guided exploratory analytics based on insights and concerns gathered from expert interviews. Our technique aims to capitalize on the benefits of counterfactual approaches while reducing their complexity for users. We integrated counterfactual guidance into an exploratory visual analytics system, and using a synthetically generated ground-truth causal dataset, conducted a comparative user study and evaluated to what extent counterfactual guidance can help lead users to more precise visual causal inferences. The results suggest that counterfactual guidance improved visual causal inference performance, and also led to different exploratory behaviors compared to correlation-based guidance. 
Based on these findings, we offer future directions to incorporate and examine counterfactual guidance to better support exploratory visual analytics.","accessible_pdf":false,"authors":[{"affiliations":["University of North Carolina-Chapel Hill, Chapel Hill, United States"],"email":"zeyuwang@cs.unc.edu","is_corresponding":true,"name":"Arran Zeyu Wang"},{"affiliations":["UNC-Chapel Hill, Chapel Hill, United States"],"email":"borland@renci.org","is_corresponding":false,"name":"David Borland"},{"affiliations":["University of North Carolina, Chapel Hill, United States"],"email":"gotz@unc.edu","is_corresponding":false,"name":"David Gotz"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Arran Zeyu Wang"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1258","time_end":"","time_stamp":"","time_start":"","title":"Beyond Correlation: Incorporating Counterfactual Guidance to Better Support Exploratory Visual Analysis","uid":"v-full-1258","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"In various scientific and industrial domains, analyzing multivariate spatial data, i.e., vectors associated with spatial locations, is common practice. To analyze those datasets, analysts may turn to models such as Spatial Blind Source Separation (SBSS). Designed explicitly for spatial data analysis, SBSS finds latent components in the dataset and is superior to popular non-spatial models like PCA. However, when analysts try different tuning parameter settings, the number of latent components complicates analytical tasks. Based on our years-long collaboration with SBSS researchers, we propose a visualization approach to tackle this challenge. The main component is UnDRground Tubes (UT), a general-purpose idiom combining ideas from set visualization and multidimensional projections. We describe the UT visualization pipeline and integrate UT into an interactive multiple-view system. We demonstrate its effectiveness through interviews with SBSS experts, a qualitative evaluation with visualization experts, and computational experiments. SBSS experts were excited about our approach. They saw many benefits for their work and potential applications for geostatistical data analysis more generally. UT was also very well received by visualization experts. 
Our benchmarks show that UT's projections and heuristics are appropriate.","accessible_pdf":false,"authors":[{"affiliations":["TU Wien, Vienna, Austria"],"email":"nikolaus.piccolotto@tuwien.ac.at","is_corresponding":true,"name":"Nikolaus Piccolotto"},{"affiliations":["TU Wien, Vienna, Austria"],"email":"mwallinger@ac.tuwien.ac.at","is_corresponding":false,"name":"Markus Wallinger"},{"affiliations":["Institute of Visual Computing and Human-Centered Technology, Vienna, Austria"],"email":"miksch@ifs.tuwien.ac.at","is_corresponding":false,"name":"Silvia Miksch"},{"affiliations":["TU Wien, Vienna, Austria"],"email":"markus.boegl@tuwien.ac.at","is_corresponding":false,"name":"Markus B\u00f6gl"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Nikolaus Piccolotto"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1272","time_end":"","time_stamp":"","time_start":"","title":"UnDRground Tubes: Exploring Spatial Data With Multidimensional Projections and Set Visualization","uid":"v-full-1272","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"We developed and validated an instrument to measure the perceived readability in data visualization: PREVis. Researchers and practitioners can easily use this instrument as part of their evaluations to compare the perceived readability of different visual data representations. Our instrument can complement results from controlled experiments on user task performance or provide additional data during in-depth qualitative work such as design iterations when developing a new technique. Although readability is recognized as an essential quality of data visualizations, so far there has not been a unified definition of the construct in the context of visual representations. As a result, researchers often lack guidance for determining how to ask people to rate their perceived readability of a visualization. To address this issue, we engaged in a rigorous process to develop the first validated instrument targeted at the subjective readability of visual data representations. Our final instrument consists of 11 items across 4 dimensions: understandability, layout clarity, readability of data values, and readability of data patterns. We provide the questionnaire as a document with implementation guidelines on osf.io/9cg8j. 
Beyond this instrument, we contribute a discussion of how researchers have previously assessed visualization readability, and an analysis of the factors underlying perceived readability in visual data representations.","accessible_pdf":false,"authors":[{"affiliations":["LISN, Universit\u00e9 Paris Saclay, CNRS, Orsay, France","Aviz, Inria, Saclay, France"],"email":"acabouat@gmail.com","is_corresponding":true,"name":"Anne-Flore Cabouat"},{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Orsay, France","Inria, Saclay, France"],"email":"tingying.he@inria.fr","is_corresponding":false,"name":"Tingying He"},{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Orsay, France","Inria, Saclay, France"],"email":"petra.isenberg@inria.fr","is_corresponding":false,"name":"Petra Isenberg"},{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Orsay, France","Inria, Saclay, France"],"email":"tobias.isenberg@gmail.com","is_corresponding":false,"name":"Tobias Isenberg"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Anne-Flore Cabouat"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1275","time_end":"","time_stamp":"","time_start":"","title":"PREVis: Perceived Readability Evaluation for Visualizations","uid":"v-full-1275","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"This paper presents a novel end-to-end framework for closed-form computation and visualization of critical point uncertainty in 2D uncertain scalar fields. Critical points are fundamental topological descriptors used in the visualization and analysis of scalar fields. The uncertainty inherent in data (e.g., observational and experimental data, approximations in simulations, and compression), however, creates uncertainty regarding critical point positions. Uncertainty in critical point positions, therefore, cannot be ignored, given its impact on downstream data analysis tasks. In this work, we study uncertainty in critical points as a function of uncertainty in data modeled with probability distributions. Although Monte Carlo (MC) sampling techniques have been used in prior studies to quantify critical point uncertainty, they are often expensive and are infrequently used in production-quality visualization software. We therefore propose a new end-to-end framework that addresses these challenges through a threefold contribution. First, we derive the critical point uncertainty in closed form, which is more accurate and efficient than the conventional MC sampling methods. Specifically, we provide the closed-form and semianalytical (a mix of closed-form and MC methods) solutions for parametric (e.g., uniform, Epanechnikov) and nonparametric models (e.g., histograms) with finite support. Second, we accelerate critical point probability computations using a parallel implementation with the VTK-m library, which is platform portable. Finally, we integrate our implementation with the ParaView software system to demonstrate near-real-time results for real datasets.","accessible_pdf":false,"authors":[{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"tushar.athawale@gmail.com","is_corresponding":true,"name":"Tushar M. 
Athawale"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"wangz@ornl.gov","is_corresponding":false,"name":"Zhe Wang"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"pugmire@ornl.gov","is_corresponding":false,"name":"David Pugmire"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"kmorel@acm.org","is_corresponding":false,"name":"Kenneth Moreland"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"gongq@ornl.gov","is_corresponding":false,"name":"Qian Gong"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"klasky@ornl.gov","is_corresponding":false,"name":"Scott Klasky"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"crj@sci.utah.edu","is_corresponding":false,"name":"Chris R. Johnson"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"paul.rosen@utah.edu","is_corresponding":false,"name":"Paul Rosen"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Tushar M. Athawale"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1277","time_end":"","time_stamp":"","time_start":"","title":"Uncertainty Visualization of Critical Points of 2D Scalar Fields for Parametric and Nonparametric Probabilistic Models","uid":"v-full-1277","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Participatory budgeting (PB) is a democratic approach to allocating municipal spending that has been adopted in many places in recent years, including in Chicago. Current PB voting resembles a ballot where residents are asked which municipal projects, such as school improvements and road repairs, to fund with a limited budget. In this work, we ask how interactive visualization can benefit PB by conducting a design probe-based interview study (N=13) with policy workers and academics with expertise in PB, urban planning, and civic HCI. Our probe explores how graphical elicitation of voter preferences and a dashboard of voting statistics can be incorporated into a realistic PB tool. Through qualitative analysis, we find that visualization creates opportunities for city government to set expectations about budget constraints while also granting their constituents greater freedom to articulate a wider range of preferences. However, using visualization to provide transparency about PB requires efforts to mitigate potential access barriers and mistrust. 
We call for more visualization professionals to help build civic capacity by working in and studying political systems.","accessible_pdf":false,"authors":[{"affiliations":["University of Chicago, Chicago, United States"],"email":"kalea@uchicago.edu","is_corresponding":true,"name":"Alex Kale"},{"affiliations":["University of Chicago, Chicago, United States"],"email":"danni6@uchicago.edu","is_corresponding":false,"name":"Danni Liu"},{"affiliations":["University of Chicago, Chicago, United States"],"email":"mariagabrielaa@uchicago.edu","is_corresponding":false,"name":"Maria Gabriela Ayala"},{"affiliations":["University of Chicago, Chicago, United States"],"email":"hwschwab@uchicago.edu","is_corresponding":false,"name":"Harper Schwab"},{"affiliations":["University of Washington, Seattle, United States","University of Utah, Salt Lake City, United States"],"email":"mcnutt.andrew@gmail.com","is_corresponding":false,"name":"Andrew M McNutt"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Alex Kale"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1281","time_end":"","time_stamp":"","time_start":"","title":"What Can Interactive Visualization do for Participatory Budgeting in Chicago?","uid":"v-full-1281","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Data tables are one of the most common ways in which people encounter data. Although mostly built with text and numbers, data tables have a spatial layout and often exhibit visual elements meant to facilitate their reading. Surprisingly, there is an empirical knowledge gap on how people read and use tables and how different visual aids affect people's ability to use them. In this work, we seek to address this vacuum through a controlled study. We asked participants to repeatedly perform four different tasks with tables in four table representation conditions (plain tables, tables with zebra striping, tables with cell background color encoding cell value, and tables with background bar length in a cell encoding cell value). We analyzed completion time, error rate, gaze-tracking data, mouse movements, and participant preferences. We found that visual encodings help with finding maximum values (especially color), but not as much as zebra striping helps in a complex task (comparison of proportional differences). We also characterize typical human behavior for the different tasks. 
These findings can inform the design of tables and research directions for improving the presentation of data in tabular form.","accessible_pdf":false,"authors":[{"affiliations":["University of Victoria, Victoria, Canada"],"email":"yongfengji@uvic.ca","is_corresponding":false,"name":"YongFeng Ji"},{"affiliations":["University of Victoria, Victoria, Canada"],"email":"cperin@uvic.ca","is_corresponding":true,"name":"Charles Perin"},{"affiliations":["University of Victoria, Victoria, Canada"],"email":"nacenta@gmail.com","is_corresponding":false,"name":"Miguel A Nacenta"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Charles Perin"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1288","time_end":"","time_stamp":"","time_start":"","title":"The Effect of Visual Aids on Reading Numeric Data Tables","uid":"v-full-1288","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Visualization linters are end-user-facing evaluators that automatically identify potential chart issues. These spell-checker-like systems offer a blend of interpretability and customization that is not found in other forms of automated assistance. However, existing linters do not model context and have primarily targeted users who do not need assistance, resulting in obvious---even annoying---advice. We investigate these issues within the domain of color palette design, which serves as a microcosm of visualization design concerns. We contribute a GUI-based color palette linter as a design probe that covers perception, accessibility, context, and other design criteria, and use it to explore visual explanations, integrated fixes, and user-defined linting rules. Through a formative interview study and theory-driven analysis, we find that linters can be meaningfully integrated into graphical contexts, thereby addressing many of their core issues. 
We discuss implications for integrating linters into visualization tools, developing improved assertion languages, and supporting end-user-tunable advice---all laying the groundwork for more effective visualization linters in any context.","accessible_pdf":false,"authors":[{"affiliations":["University of Washington, Seattle, United States","University of Utah, Salt Lake City, United States"],"email":"mcnutt.andrew@gmail.com","is_corresponding":true,"name":"Andrew M McNutt"},{"affiliations":["University of Washington, Seattle, United States"],"email":"maureen.stone@gmail.com","is_corresponding":false,"name":"Maureen Stone"},{"affiliations":["University of Washington, Seattle, United States"],"email":"jheer@uw.edu","is_corresponding":false,"name":"Jeffrey Heer"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Andrew M McNutt"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1290","time_end":"","time_stamp":"","time_start":"","title":"Mixing Linters with GUIs: A Color Palette Design Probe","uid":"v-full-1290","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Emotion is an important factor to consider when designing visualizations as it can impact the amount of trust viewers place in a visualization, how well they can retrieve information and understand the underlying data, and how much they engage with or connect to a visualization. We conducted five crowdsourced experiments to quantify the effects of color, chart type, data trend, data variability, and data density on emotion (measured through self-reported arousal and valence). Results from our experiments show that there are multiple design elements that influence the emotion induced by a visualization and, more surprisingly, that certain data characteristics influence the emotion of viewers even when the data has no meaning. 
In light of these findings, we offer guidelines on how to use color, scale, and chart type to counterbalance and emphasize the emotional impact of immutable data characteristics.","accessible_pdf":false,"authors":[{"affiliations":["University of Waterloo, Waterloo, Canada","University of Victoria, Victoria, Canada"],"email":"cartergblair@gmail.com","is_corresponding":false,"name":"Carter Blair"},{"affiliations":["University of Victoria, Victoria, Canada","Delft University of Technology, Delft, Netherlands"],"email":"xiyao.wang23@gmail.com","is_corresponding":false,"name":"Xiyao Wang"},{"affiliations":["University of Victoria, Victoria, Canada"],"email":"cperin@uvic.ca","is_corresponding":true,"name":"Charles Perin"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Charles Perin"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1291","time_end":"","time_stamp":"","time_start":"","title":"Quantifying Emotional Responses to Immutable Data Characteristics and Designer Choices in Data Visualizations","uid":"v-full-1291","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Annotations play a vital role in highlighting critical aspects of visualizations, aiding in data externalization and exploration, collaborative data analysis, and visual storytelling. However, despite their widespread use, we identified a lack of a design space capturing common annotation practices. In this paper, we evaluated over 1,800 static annotated charts to understand how people annotate visualizations in practice. Through qualitative coding of these diverse real-world annotated charts, we explore three primary aspects of annotation usage patterns: analytic purposes for chart annotations (e.g., present, identify, summarize, or compare data features), mechanisms for chart annotations (e.g., types and combinations of annotations used, frequency of different annotation types across chart types), and the data source used to generate the annotations. We then synthesized our findings into a design space of annotations, highlighting key design choices for chart annotations. 
We presented three case studies illustrating our design space as a practical framework for chart annotations to enhance the communication of visualization insights.","accessible_pdf":false,"authors":[{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"dilshadur@sci.utah.edu","is_corresponding":true,"name":"Md Dilshadur Rahman"},{"affiliations":["University of Oklahoma, Norman, United States"],"email":"quadri@ou.edu","is_corresponding":false,"name":"Ghulam Jilani Quadri"},{"affiliations":["University of South Florida, Tampa, United States"],"email":"bdoppalapudi@usf.edu","is_corresponding":false,"name":"Bhavana Doppalapudi"},{"affiliations":["University of North Carolina-Chapel Hill, Chapel Hill, United States"],"email":"danielle.szafir@cs.unc.edu","is_corresponding":false,"name":"Danielle Albers Szafir"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"paul.rosen@utah.edu","is_corresponding":false,"name":"Paul Rosen"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Md Dilshadur Rahman"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1295","time_end":"","time_stamp":"","time_start":"","title":"A Qualitative Analysis of Common Practices in Annotations: A Taxonomy and Design Space","uid":"v-full-1295","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"We present the results of an exploratory study on how pairs interact with speech commands and touch gestures on a wall-sized display during a collaborative sensemaking task. Previous work has shown that speech commands, alone or in combination with other input modalities, can support visual data exploration by individuals. However, it is still unknown whether and how speech commands can be used in collaboration, and for what tasks. To answer these questions, we developed a functioning prototype that we used as a technology probe. We conducted an in-depth exploratory study with 20 participants (10 pairs) to analyze their interaction choices, the interplay between the input modalities, and their collaboration. While touch was the most used modality, we found that participants preferred speech commands for global operations and used them for distant interaction, and that speech interaction contributed to the awareness of the partner\u2019s actions. Furthermore, the likelihood of using speech commands during collaboration was related to the personality trait of agreeableness. Regarding collaboration styles, participants interacted with speech equally often whether they were in loosely or closely coupled collaboration. While the partners stood closer to each other during close collaboration, they did not walk away from their partner to use speech commands. 
From our findings, we derive and contribute a set of design considerations for collaborative and multimodal interactive data analysis systems.","accessible_pdf":false,"authors":[{"affiliations":["University of Bremen, Bremen, Germany"],"email":"molina@uni-bremen.de","is_corresponding":true,"name":"Gabriela Molina Le\u00f3n"},{"affiliations":["LISN, Universit\u00e9 Paris-Saclay, CNRS, INRIA, Orsay, France"],"email":"anastasia.bezerianos@universite-paris-saclay.fr","is_corresponding":false,"name":"Anastasia Bezerianos"},{"affiliations":["Inria, Palaiseau, France"],"email":"olivier.gladin@inria.fr","is_corresponding":false,"name":"Olivier Gladin"},{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Orsay, France","Inria, Saclay, France"],"email":"petra.isenberg@inria.fr","is_corresponding":false,"name":"Petra Isenberg"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Gabriela Molina Le\u00f3n"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1302","time_end":"","time_stamp":"","time_start":"","title":"Talk to the Wall: The Role of Speech Interaction in Collaborative Visual Analytics","uid":"v-full-1302","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Building information modeling (BIM) describes a central data pool covering the entire life cycle of a construction project. Similarly, building energy modeling (BEM) describes the process of using a 3D representation of a building as a basis for thermal simulations to assess the building\u2019s energy performance. This paper explores the intersection of BIM and BEM, focusing on the challenges and methodologies in converting BIM data into BEM representations for energy performance analysis. Our approach, BEMTrace, integrates 3D data wrangling techniques with visualization methodologies to enhance the accuracy and traceability of the BIM-to-BEM conversion process. Through parsing, error detection, and algorithmic correction of BIM data, our methods generate valid BEM models suitable for energy simulation. Visualization techniques provide transparent insights into the conversion process, aiding error identification, validation, and user comprehension. We introduce context-adaptive selections to facilitate user interaction and understanding throughout the conversion process. 
By evaluating user feedback, we show that BEMTrace can solve domain-specific tasks.","accessible_pdf":false,"authors":[{"affiliations":["VRVis Zentrum f\u00fcr Virtual Reality und Visualisierung Forschungs-GmbH, Vienna, Austria"],"email":"walch@vrvis.at","is_corresponding":false,"name":"Andreas Walch"},{"affiliations":["VRVis Zentrum f\u00fcr Virtual Reality und Visualisierung Forschungs-GmbH, Vienna, Austria"],"email":"szabo@vrvis.at","is_corresponding":false,"name":"Attila Szabo"},{"affiliations":["VRVis Zentrum f\u00fcr Virtual Reality und Visualisierung Forschungs-GmbH, Vienna, Austria"],"email":"hs@vrvis.at","is_corresponding":false,"name":"Harald Steinlechner"},{"affiliations":["Independent Researcher, Vienna, Austria"],"email":"thomas@ortner.fyi","is_corresponding":false,"name":"Thomas Ortner"},{"affiliations":["Institute of Visual Computing & Human-Centered Technology, Vienna, Austria"],"email":"groeller@cg.tuwien.ac.at","is_corresponding":false,"name":"Eduard Gr\u00f6ller"},{"affiliations":["VRVis Zentrum f\u00fcr Virtual Reality und Visualisierung Forschungs-GmbH, Vienna, Austria"],"email":"johanna.schmidt@vrvis.at","is_corresponding":true,"name":"Johanna Schmidt"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Johanna Schmidt"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1307","time_end":"","time_stamp":"","time_start":"","title":"BEMTrace: Visualization-driven approach for deriving Building Energy Models from BIM","uid":"v-full-1307","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Visualizations play a critical role in validating and improving statistical models. However, the design space of model check visualizations is not well understood, making it difficult for authors to explore and specify effective graphical model checks. VMC defines a model check visualization using four components: (1) samples of distributions of checkable quantities generated from the model, including predictive distributions for new data and distributions of model parameters; (2) transformations on observed data to facilitate comparison; (3) visual representations of distributions; and (4) layouts to facilitate comparing model samples and observed data. We contribute an implementation of VMC as an R package. We validate VMC by reproducing a set of canonical model check examples, and show how using VMC to generate model checks reduces the edit distance between visualizations relative to existing visualization toolkits. 
The findings of an interview study with three expert modelers who used VMC highlight challenges and opportunities for encouraging exploration of correct, effective model check visualizations.","accessible_pdf":false,"authors":[{"affiliations":["Northwestern University, Evanston, United States"],"email":"ziyangguo1030@gmail.com","is_corresponding":true,"name":"Ziyang Guo"},{"affiliations":["University of Chicago, Chicago, United States"],"email":"kalea@uchicago.edu","is_corresponding":false,"name":"Alex Kale"},{"affiliations":["Northwestern University, Chicago, United States"],"email":"matthew.kay@gmail.com","is_corresponding":false,"name":"Matthew Kay"},{"affiliations":["Northwestern University, Evanston, United States"],"email":"jhullman@northwestern.edu","is_corresponding":false,"name":"Jessica Hullman"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Ziyang Guo"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1309","time_end":"","time_stamp":"","time_start":"","title":"VMC: A Grammar for Visualizing Statistical Model Checks","uid":"v-full-1309","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"We apply an approach from cognitive linguistics by mapping Conceptual Metaphor Theory (CMT) to the visualization domain to address patterns of visual conceptual metaphors that are often used in science infographics. Metaphors play an essential part in visual communication and are frequently employed to explain complex concepts. However, their use is often based on intuition, rather than following a formal process. At present, we lack tools and language for understanding and describing metaphor use in visualization to the extent that a taxonomy and grammar could guide the creation of visual components, e.g., infographics. Our classification of the visual conceptual mappings within scientific representations is based on the breakdown of visual components in existing scientific infographics. We demonstrate the development of this mapping through a detailed analysis of data collected from four domains (biomedicine, climate, space, and anthropology) that represent a diverse range of visual conceptual metaphors used in the visual communication of science. This work allows us to identify patterns of visual conceptual metaphor use within the domains, resolve ambiguities about why specific conceptual metaphors are used, and develop a better overall understanding of visual metaphor use in scientific infographics. Our analysis shows that ontological and orientational conceptual metaphors are the most widely applied to translate complex scientific concepts. 
To support our findings, we developed a visual exploratory tool based on the collected database that places the individual infographics on a spatio-temporal scale and illustrates the breakdown of visual conceptual metaphors.","accessible_pdf":false,"authors":[{"affiliations":["Masaryk University, Brno, Czech Republic"],"email":"hana.pokojna@gmail.com","is_corresponding":true,"name":"Hana Pokojn\u00e1"},{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Orsay, France","Inria, Saclay, France"],"email":"tobias.isenberg@gmail.com","is_corresponding":false,"name":"Tobias Isenberg"},{"affiliations":["University of Rostock, Rostock, Germany"],"email":"stefan.bruckner@gmail.com","is_corresponding":false,"name":"Stefan Bruckner"},{"affiliations":["Masaryk University, Brno, Czech Republic"],"email":"kozlikova@fi.muni.cz","is_corresponding":false,"name":"Barbora Kozlikova"},{"affiliations":["University of Bergen, Bergen, Norway","Haukeland University Hospital, University of Bergen, Bergen, Norway"],"email":"laura.garrison@uib.no","is_corresponding":false,"name":"Laura Garrison"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Hana Pokojn\u00e1"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1316","time_end":"","time_stamp":"","time_start":"","title":"The Language of Infographics: Toward Understanding Conceptual Metaphor Use in Scientific Storytelling","uid":"v-full-1316","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"In this study, we address the growing issue of misleading charts, a prevalent problem that undermines the integrity of information dissemination. Misleading charts can distort the viewer's perception of data, leading to misinterpretations and decisions based on false information. The development of effective automatic detection methods for misleading charts is an urgent field of research. The advancement of multimodal Large Language Models (LLMs) has introduced a promising direction for addressing this challenge. We explored the capabilities of these models in analyzing complex charts and assessing the impact of different prompting strategies on the models' analyses. We utilized a dataset of misleading charts collected from the internet by prior research and crafted nine distinct prompts, ranging from simple to complex, to test the ability of four different multimodal LLMs to detect over 21 different chart issues. Through three experiments--from initial exploration to detailed analysis--we progressively gained insights into how to effectively prompt LLMs to identify misleading charts and developed strategies to address the scalability challenges encountered as we expanded our detection range from the initial five issues to 21 issues in the final experiment. Our findings reveal that multimodal LLMs possess a strong capability for chart comprehension and critical thinking in data interpretation. There is significant potential in employing multimodal LLMs to counter misleading information by supporting critical thinking and enhancing visualization literacy. 
This study demonstrates their applicability in addressing the pressing concern of misleading charts.","accessible_pdf":false,"authors":[{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"yhload@cse.ust.hk","is_corresponding":true,"name":"Leo Yu-Ho Lo"},{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"huamin@cse.ust.hk","is_corresponding":false,"name":"Huamin Qu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Leo Yu-Ho Lo"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1318","time_end":"","time_stamp":"","time_start":"","title":"How Good (Or Bad) Are LLMs in Detecting Misleading Visualizations","uid":"v-full-1318","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Dynamic data visualizations can convey large amounts of information over time, such as using motion to depict changes in data values for multiple entities. Such dynamic displays put a demand on our visual processing capacities, yet our perception of motion is limited. When tracking multiple objects across space and time, humans can typically track up to four objects, and the capacity is even lower if we also need to remember the history of the objects\u2019 features. Several techniques have been shown to improve the processing of dynamic displays. Staging the animation to sequentially show steps in a transition and tracing object movement by displaying trajectory histories can increase processing by reducing the cognitive load. In this paper, we examine the effectiveness of staging and tracing in dynamic displays. We showed participants animated line charts depicting the movements of lines and asked them to identify the line with the highest mean and variance. We manipulated the animation to display the lines with or without staging, tracing, and history, and compared the results to a static chart as a control. Results showed that tracing and staging are preferred by participants, and improve their performance in mean and variance tasks, respectively. The preferred display time was 3 times shorter when staging was used. Also, encoding animation speed with mean and variance in congruent tasks is associated with higher accuracy. 
These findings help inform real-world best practices for building dynamic displays that leverage the strengths of human visual processing.","accessible_pdf":false,"authors":[{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"shu343@gatech.edu","is_corresponding":true,"name":"Songwen Hu"},{"affiliations":["Northwestern University, Evanston, United States"],"email":"ouxunjiang@u.northwestern.edu","is_corresponding":false,"name":"Ouxun Jiang"},{"affiliations":["Dolby Laboratories Inc., San Francisco, United States"],"email":"jcr@dolby.com","is_corresponding":false,"name":"Jeffrey Riedmiller"},{"affiliations":["Georgia Tech, Atlanta, United States","University of Massachusetts Amherst, Amherst, United States"],"email":"cxiong@gatech.edu","is_corresponding":false,"name":"Cindy Xiong Bearfield"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Songwen Hu"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1325","time_end":"","time_stamp":"","time_start":"","title":"Motion-Based Visual Encoding Can Improve Performance on Perceptual Tasks with Dynamic Time Series","uid":"v-full-1325","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Evaluating the quality of text responses generated by large language models (LLMs) poses unique challenges compared to traditional machine learning. While automatic side-by-side evaluation has emerged as a promising approach, LLM developers face scalability and interpretability challenges in analyzing these evaluation results. In this paper, we present LLM Comparator, a novel visual analytics tool for interactively analyzing results from side-by-side evaluation of LLMs. The tool provides users with interactive workflows to understand when and why a model performs better or worse than a baseline model, and how the responses from two models differ qualitatively. We iteratively designed and developed the tool by closely working with researchers and engineers at a large technology company. Qualitative feedback from users highlights that the tool facilitates in-depth analysis of individual examples while enabling users to visually overview and flexibly slice data. 
This empowers users to identify undesirable patterns, formulate hypotheses about model behavior, and gain insights for model improvement.","accessible_pdf":false,"authors":[{"affiliations":["Google, Atlanta, United States"],"email":"minsuk.kahng@gmail.com","is_corresponding":true,"name":"Minsuk Kahng"},{"affiliations":["Google Research, Seattle, United States"],"email":"iftenney@google.com","is_corresponding":false,"name":"Ian Tenney"},{"affiliations":["Google Research, Cambridge, United States"],"email":"mahimap@google.com","is_corresponding":false,"name":"Mahima Pushkarna"},{"affiliations":["Google Research, Pittsburgh, United States"],"email":"lxieyang.cmu@gmail.com","is_corresponding":false,"name":"Michael Xieyang Liu"},{"affiliations":["Google Research, Cambridge, United States"],"email":"jwexler@google.com","is_corresponding":false,"name":"James Wexler"},{"affiliations":["Google, Cambridge, United States"],"email":"ereif@google.com","is_corresponding":false,"name":"Emily Reif"},{"affiliations":["Google Research, Mountain View, United States"],"email":"kallarackal@google.com","is_corresponding":false,"name":"Krystal Kallarackal"},{"affiliations":["Google Research, Seattle, United States"],"email":"minsuk.cs@gmail.com","is_corresponding":false,"name":"Minsuk Chang"},{"affiliations":["Google, Cambridge, United States"],"email":"michaelterry@google.com","is_corresponding":false,"name":"Michael Terry"},{"affiliations":["Google, Paris, France"],"email":"ldixon@google.com","is_corresponding":false,"name":"Lucas Dixon"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Minsuk Kahng"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1326","time_end":"","time_stamp":"","time_start":"","title":"LLM Comparator: Interactive Analysis of Side-by-Side Evaluation of Large Language Models","uid":"v-full-1326","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"The integration of Large Language Models (LLMs), especially ChatGPT, into education is poised to revolutionize students' learning experiences by introducing innovative conversational learning methodologies. To empower students to fully leverage the capabilities of ChatGPT in educational scenarios, instructors must understand students' interaction patterns with ChatGPT. However, this endeavor is challenging due to the absence of datasets focused on student-ChatGPT conversations and the complexities in identifying and analyzing the evolving interaction patterns within conversations. To address these challenges, we collected conversational data from 48 students interacting with ChatGPT in a master's level data visualization course over one semester. We then developed a coding scheme, grounded in the literature on cognitive levels and thematic analysis, to categorize students' interaction patterns with ChatGPT. Furthermore, we present a visual analytics system, StuGPTViz, that tracks and compares temporal patterns in student prompts and the quality of ChatGPT's responses at multiple scales, revealing significant pedagogical insights for instructors. We validated the system's effectiveness through expert interviews with six data visualization instructors and three case studies. 
The results confirmed StuGPTViz's capacity to enhance educators' insights into the pedagogical value of ChatGPT. We also discussed the potential research opportunities of applying visual analytics in education and developing AI-driven personalized learning solutions.","accessible_pdf":false,"authors":[{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"zchendf@connect.ust.hk","is_corresponding":true,"name":"Zixin Chen"},{"affiliations":["The Hong Kong University of Science and Technology, Sai Kung, China"],"email":"csejiachenw@ust.hk","is_corresponding":false,"name":"Jiachen Wang"},{"affiliations":["Texas A&M University, College Station, United States"],"email":"xiameng9355@gmail.com","is_corresponding":false,"name":"Meng Xia"},{"affiliations":["The Hong Kong University of Science and Technology, Kowloon, Hong Kong"],"email":"kshigyo@connect.ust.hk","is_corresponding":false,"name":"Kento Shigyo"},{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"dliuak@connect.ust.hk","is_corresponding":false,"name":"Dingdong Liu"},{"affiliations":["Hong Kong University of Science and Technology, Hong Kong, Hong Kong"],"email":"rzhangab@connect.ust.hk","is_corresponding":false,"name":"Rong Zhang"},{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"huamin@cse.ust.hk","is_corresponding":false,"name":"Huamin Qu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Zixin Chen"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1329","time_end":"","time_stamp":"","time_start":"","title":"StuGPTViz: A Visual Analytics Approach to Understand Student-ChatGPT Interactions","uid":"v-full-1329","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Translating natural language to visualization (NL2VIS) has shown great promise for visual data analysis, but it remains a challenging task that requires multiple low-level implementations, such as natural language processing and visualization design. Recent advancements in pre-trained large language models (LLMs) are opening new avenues for generating visualizations from natural language. However, the lack of a comprehensive and reliable benchmark hinders our understanding of LLMs\u2019 capabilities in visualization generation. In this paper, we address this gap by proposing a new NL2VIS benchmark called VisEval. Firstly, we introduce a high-quality and large-scale dataset. This dataset includes 2,524 representative queries covering 146 databases, paired with accurately labeled ground truths. Secondly, we advocate for a comprehensive automated evaluation methodology covering multiple dimensions, including validity, legality, and readability. By systematically scanning for potential issues with a number of heterogeneous checkers, VisEval provides reliable and trustworthy evaluation outcomes. We run VisEval on a series of state-of-the-art LLMs. 
Our evaluation reveals prevalent challenges and delivers essential insights for future advancements.","accessible_pdf":false,"authors":[{"affiliations":["Microsoft Research, Shanghai, China"],"email":"christy05.chen@gmail.com","is_corresponding":true,"name":"Nan Chen"},{"affiliations":["Microsoft Research, Shanghai, China"],"email":"scottyugochang@gmail.com","is_corresponding":false,"name":"Yuge Zhang"},{"affiliations":["Microsoft Research, Shanghai, China"],"email":"jiahangxu@microsoft.com","is_corresponding":false,"name":"Jiahang Xu"},{"affiliations":["ShanghaiTech University, Shanghai, China"],"email":"rk.ren@outlook.com","is_corresponding":false,"name":"Kan Ren"},{"affiliations":["Microsoft Research, Shanghai, China"],"email":"yuqyang@microsoft.com","is_corresponding":false,"name":"Yuqing Yang"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Nan Chen"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1332","time_end":"","time_stamp":"","time_start":"","title":"VisEval: A Benchmark for Data Visualization in the Era of Large Language Models","uid":"v-full-1332","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Data videos are increasingly becoming a popular data storytelling form that integrates visuals and audio. In recent years, researchers have explored many narrative structures for effective and attractive data storytelling. Meanwhile, the Hero's Journey provides a classic narrative framework specific to the Hero's story that has been adopted by various media. There are continuous discussions about applying the Hero's Journey to data stories. However, there is so far little systematic and practical guidance on how to create a data video for a specific story type like the Hero's Journey, or on how to manipulate its sound and visual designs simultaneously. To fill this gap, we first identified 48 data videos aligned with the Hero's Journey as the common storytelling form among 109 high-quality data videos. Then, we examined how existing practices apply the Hero's Journey to creating data videos. We coded the 48 data videos in terms of the narrative stages, sound design, and visual design according to the Hero's Journey structure. Based on our findings, we proposed a design space to provide practical guidance on customizing the narrative, visual, and sound design for different narrative segments of the Hero's Journey (i.e., Departure, Initiation, Return) in data video creation. To validate our proposed design space, we conducted a user study where 20 participants were invited to design data videos with and without our design space guidance, which was evaluated by two experts. 
Results show that our design space provides useful and practical guidance for data storytellers to effectively create data videos with the Hero's Journey.","accessible_pdf":false,"authors":[{"affiliations":["The Hong Kong University of Science and Technology, Guangzhou, China"],"email":"zwei302@connect.hkust-gz.edu.cn","is_corresponding":true,"name":"Zheng Wei"},{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"huamin@cse.ust.hk","is_corresponding":false,"name":"Huamin Qu"},{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"xxubq@connect.ust.hk","is_corresponding":false,"name":"Xian Xu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Zheng Wei"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1333","time_end":"","time_stamp":"","time_start":"","title":"Telling Data Stories with the Hero\u2019s Journey: Design Guidance for Creating Data Videos","uid":"v-full-1333","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Genomics experts rely on visualization to extract and share insights from complex and large-scale datasets. Beyond off-the-shelf tools for data exploration, there is an increasing need for platforms that aid experts in authoring customized visualizations for both exploration and communication of insights. A variety of interactive techniques have been proposed for authoring data visualizations, such as template editing, shelf configuration, natural language input, and code editors. However, it remains unclear how genomics experts create visualizations and which techniques best support their visualization tasks and needs. To address this gap, we conducted two user studies with genomics researchers: (1) semi-structured interviews (n=20) to identify the tasks, user contexts, and current visualization authoring techniques and (2) an exploratory study (n=13) using visual probes to elicit users\u2019 intents and desired techniques when creating visualizations. Our contributions include (1) a characterization of how visualization authoring is currently utilized in genomics visualization, identifying limitations and benefits in light of common criteria for authoring tools, and (2) generalizable and actionable design implications for genomics visualization authoring tools based on our findings on task- and user-specific usefulness of authoring techniques.","accessible_pdf":false,"authors":[{"affiliations":["Eindhoven University of Technology, Eindhoven, Netherlands"],"email":"a.v.d.brandt@tue.nl","is_corresponding":true,"name":"Astrid van den Brandt"},{"affiliations":["Harvard Medical School, Boston, United States"],"email":"sehi_lyi@hms.harvard.edu","is_corresponding":false,"name":"Sehi L'Yi"},{"affiliations":["Harvard Medical School, Boston, United States"],"email":"huyen_nguyen@hms.harvard.edu","is_corresponding":false,"name":"Huyen N. 
Nguyen"},{"affiliations":["Eindhoven University of Technology, Eindhoven, Netherlands"],"email":"a.vilanova@tue.nl","is_corresponding":false,"name":"Anna Vilanova"},{"affiliations":["Harvard Medical School, Boston, United States"],"email":"nils@hms.harvard.edu","is_corresponding":false,"name":"Nils Gehlenborg"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Astrid van den Brandt"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1342","time_end":"","time_stamp":"","time_start":"","title":"Understanding Visualization Authoring Techniques for Genomics Data in the Context of Personas and Tasks","uid":"v-full-1342","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"As basketball\u2019s popularity surges, fans often find themselves confused and overwhelmed by the rapid game pace and complexity. Basketball tactics, involving a complex series of actions, require substantial knowledge to be fully understood. This complexity leads to a need for additional information and explanation, which can distract fans from the game. To tackle these challenges, we present Sportify, a Visual Question Answering system that integrates narratives and embedded visualization for demystifying basketball tactical questions, aiding fans in understanding various game aspects. We propose three novel action visualizations (i.e., Pass, Cut, and Screen) to demonstrate critical action sequences. To explain the reasoning and logic behind players\u2019 actions, we leverage a large-language model (LLM) to generate narratives. We adopt a storytelling approach for complex scenarios from both first and third-person perspectives, integrating action visualizations. We evaluated Sportify with basketball fans to investigate its impact on understanding of tactics, and how different personal perspectives of narratives impact the understanding of complex tactic with action visualizations. Our evaluation with basketball fans demonstrates Sportify\u2019s capability to deepen tactical insights and amplify the viewing experience. 
Furthermore, third-person narration helps people obtain in-depth game explanations, while first-person narration enhances fans\u2019 game engagement.","accessible_pdf":false,"authors":[{"affiliations":["Harvard University, Allston, United States"],"email":"chungyi347@gmail.com","is_corresponding":true,"name":"Chunggi Lee"},{"affiliations":["Harvard University, Cambridge, United States"],"email":"mlin@g.harvard.edu","is_corresponding":false,"name":"Tica Lin"},{"affiliations":["University of Minnesota-Twin Cities, Minneapolis, United States"],"email":"ztchen@umn.edu","is_corresponding":false,"name":"Chen Zhu-Tian"},{"affiliations":["Harvard University, Cambridge, United States"],"email":"pfister@seas.harvard.edu","is_corresponding":false,"name":"Hanspeter Pfister"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Chunggi Lee"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1351","time_end":"","time_stamp":"","time_start":"","title":"Sportify: Question Answering with Embedded Visualizations and Personified Narratives for Sports Video","uid":"v-full-1351","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Data visualization aids in making data analysis more intuitive and in-depth, with widespread applications in fields such as biology, finance, and medicine. Massive and continuously growing streaming time series data are typically visualized in the form of line charts, but the data transmission puts significant pressure on the network, leading to visualization lag or even complete rendering failure. This paper proposes a universal sampling algorithm, FPCS, which retains feature points from continuously received streaming time series data and compensates for frequently fluctuating feature points, aiming to achieve efficient visualization. This algorithm bridges the gap in sampling for streaming time series data. 
The algorithm has several advantages: (1) It optimizes the sampling results by compensating with fewer feature points, retaining the visualization features of the original data very well and ensuring high-quality sampled data; (2) Its execution time is the shortest among similar existing algorithms; (3) It has an almost negligible space overhead; (4) The data sampling process does not depend on the overall data; (5) It can be applied to both infinite streaming data and finite static data.","accessible_pdf":false,"authors":[{"affiliations":["China Nanhu Academy of Electronics and Information Technology(CNAEIT), JiaXing, China"],"email":"3271961659@qq.com","is_corresponding":true,"name":"Hongyan Li"},{"affiliations":["China Nanhu Academy of Electronics and Information Technology(CNAEIT), JiaXing, China"],"email":"ustcboy@outlook.com","is_corresponding":false,"name":"Bo Yang"},{"affiliations":["China Nanhu Academy of Electronics and Information Technology, Jiaxing, China"],"email":"caiyansong@cnaeit.com","is_corresponding":false,"name":"Yansong Chua"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Hongyan Li"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1363","time_end":"","time_stamp":"","time_start":"","title":"FPCS: Feature Preserving Compensated Sampling of Streaming Time Series Data","uid":"v-full-1363","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Synthetic Lethal (SL) relationships, although rare among the vast array of gene combinations, hold substantial promise for targeted cancer therapy. Despite advancements in AI model accuracy, there remains a persistent need among domain experts for interpretive paths and mechanism explorations that better harmonize with domain-specific knowledge, particularly due to the significant costs involved in experimentation. To address this gap, we propose an iterative Human-AI collaborative framework comprising two key components: 1) Human-Engaged Knowledge Graph Refinement based on Metapath Strategies, which leverages insights from interpretive paths and domain expertise to refine the knowledge graph through metapath strategies with appropriate granularity. 2) Cross-Granularity SL Interpretation Enhancement and Mechanism Analysis, which aids domain experts in organizing and comparing prediction results and interpretive paths across different granularities, thereby uncovering new SL relationships, enhancing result interpretation, and elucidating potential mechanisms inferred by Graph Neural Network (GNN) models. These components cyclically optimize model predictions and mechanism explorations, thereby enhancing expert involvement and intervention to build trust. This framework, facilitated by SLInterpreter, ensures that newly generated interpretive paths increasingly align with domain knowledge and adhere more closely to real-world biological principles through iterative Human-AI collaboration. 
Subsequently, we evaluate the efficacy of the framework through a case study and expert interviews.","accessible_pdf":false,"authors":[{"affiliations":["Shanghaitech University, Shanghai, China"],"email":"jianghr2023@shanghaitech.edu.cn","is_corresponding":true,"name":"Haoran Jiang"},{"affiliations":["ShanghaiTech University, Shanghai, China"],"email":"shishh2023@shanghaitech.edu.cn","is_corresponding":false,"name":"Shaohan Shi"},{"affiliations":["ShanghaiTech University, Shanghai, China"],"email":"zhangshh2@shanghaitech.edu.cn","is_corresponding":false,"name":"Shuhao Zhang"},{"affiliations":["ShanghaiTech University, Shanghai, China"],"email":"zhengjie@shanghaitech.edu.cn","is_corresponding":false,"name":"Jie Zheng"},{"affiliations":["ShanghaiTech University, Shanghai, China"],"email":"liquan@shanghaitech.edu.cn","is_corresponding":false,"name":"Quan Li"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Haoran Jiang"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1368","time_end":"","time_stamp":"","time_start":"","title":"SLInterpreter: An Exploratory and Iterative Human-AI Collaborative System for GNN-based Synthetic Lethal Prediction","uid":"v-full-1368","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"In volume visualization, visualization synthesis has attracted much attention due to its ability to generate novel visualizations without following the conventional rendering pipeline. However, existing solutions based on generative adversarial networks often require many training images and take significant training time. Moreover, issues with quality, consistency, and flexibility persist. This paper introduces StyleRF-VolVis, an innovative style transfer framework for expressive volume visualization (VolVis) via neural radiance field (NeRF). The expressiveness of StyleRF-VolVis is upheld by its ability to accurately separate the underlying scene geometry (i.e., content) and color appearance (i.e., style), conveniently modify color, opacity, and lighting of the original rendering while maintaining visual content consistency across the views, and effectively transfer arbitrary styles from reference images to the reconstructed 3D scene. To achieve these, we design a base NeRF model for scene geometry extraction, a palette color network to classify regions of the radiance field for photorealistic editing, and an unrestricted color network to lift the color palette constraint via knowledge distillation for non-photorealistic editing. 
We demonstrate the superior quality, consistency, and flexibility of StyleRF-VolVis by experimenting with various volume rendering scenes and reference images and comparing StyleRF-VolVis against other image-based (AdaIN), video-based (ReReVST), and NeRF-based (ARF and SNeRF) style rendering solutions.","accessible_pdf":false,"authors":[{"affiliations":["University of Notre Dame, Notre Dame, United States"],"email":"ktang2@nd.edu","is_corresponding":true,"name":"Kaiyuan Tang"},{"affiliations":["University of Notre Dame, Notre Dame, United States"],"email":"chaoli.wang@nd.edu","is_corresponding":false,"name":"Chaoli Wang"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Kaiyuan Tang"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1391","time_end":"","time_stamp":"","time_start":"","title":"StyleRF-VolVis: Style Transfer of Neural Radiance Fields for Expressive Volume Visualization","uid":"v-full-1391","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"This paper discusses challenges and design strategies in responsive design for thematic maps in information visualization. Thematic maps pose a number of unique challenges for responsiveness, such as inflexible aspect ratios that do not easily adapt to varying screen dimensions, or densely clustered visual elements in urban areas becoming illegible at smaller scales. However, design guidance on how to best address these issues is currently lacking. We conducted design sessions with eight professional designers and developers of web-based thematic maps for information visualization. Participants were asked to redesign a given map for various screen sizes and aspect ratios and to describe their reasoning for when and how they adapted the design. We report general observations of practitioners\u2019 motivations, decision-making processes, and personal design frameworks. We then derive seven challenges commonly encountered in responsive map design, and 17 strategies to address them, such as repositioning elements, segmenting the map, or using alternative visualizations. We compile these challenges and strategies into an illustrated cheat sheet targeted at anyone designing or learning to design responsive maps. 
The cheat sheet is available online: https://responsive-vis.github.io/map-cheat-sheet.","accessible_pdf":false,"authors":[{"affiliations":["University of Edinburgh, Edinburgh, United Kingdom"],"email":"sarah.schoettler@ed.ac.uk","is_corresponding":true,"name":"Sarah Sch\u00f6ttler"},{"affiliations":["University of Edinburgh, Edinburgh, United Kingdom"],"email":"uhinrich@ed.ac.uk","is_corresponding":false,"name":"Uta Hinrichs"},{"affiliations":["Inria, Bordeaux, France","University of Edinburgh, Edinburgh, United Kingdom"],"email":"bbach@inf.ed.ac.uk","is_corresponding":false,"name":"Benjamin Bach"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Sarah Sch\u00f6ttler"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1393","time_end":"","time_stamp":"","time_start":"","title":"Practices and Strategies in Responsive Thematic Map Design: A Report from Design Workshops with Experts","uid":"v-full-1393","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"This paper presents discursive patinas, a technique to visualize discussions onto data visualizations, inspired by how people leave traces in the physical world. While data visualizations are widely discussed in online communities and social media, comments tend to be displayed separately from the visualization. We lack ways to relate these discussions to the content of the visualization, e.g., to situate comments, explain visual patterns, or question assumptions. In our visualization annotation interface, users can designate areas within the visualization to, e.g., highlight specific visual marks (anchors), attach textual comments, and add category labels, likes, and replies. By coloring and styling these designated areas, a meta visualization emerges, showing what and where people comment and annotate. These patinas show regions of heavy discussions, recent commenting activity, and the distribution of questions, suggestions, or personal stories. To study how people use anchors to discuss visualizations and understand if and how information in patinas influences people's understanding of the discussion, we ran workshops with 90 participants including students, domain experts, and visualization researchers. Our results show that discursive patinas improve the ability to navigate discussions and guide people to comments that help understand, contextualize, or scrutinize the visualization. 
We discuss the potential of the technique to support discursive engagements, including critical readings of visualizations, design feedback, and feminist approaches to data visualization.","accessible_pdf":false,"authors":[{"affiliations":["University of Edinburgh, Edinburgh, United Kingdom","Potsdam University of Applied Sciences, Potsdam, Germany"],"email":"tobias.kauer@fh-potsdam.de","is_corresponding":true,"name":"Tobias Kauer"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"derya.akbaba@liu.se","is_corresponding":false,"name":"Derya Akbaba"},{"affiliations":["University of Applied Sciences Potsdam, Potsdam, Germany"],"email":"doerk@fh-potsdam.de","is_corresponding":false,"name":"Marian D\u00f6rk"},{"affiliations":["Inria, Bordeaux, France","University of Edinburgh, Edinburgh, United Kingdom"],"email":"bbach@inf.ed.ac.uk","is_corresponding":false,"name":"Benjamin Bach"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Tobias Kauer"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1394","time_end":"","time_stamp":"","time_start":"","title":"Discursive Patinas: Anchoring Discussions in Data Visualizations","uid":"v-full-1394","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Onboarding a user to a visualization dashboard entails explaining its various components, including the chart types used, the data loaded, and the interactions provided. Authoring such an onboarding experience is time-consuming and requires significant knowledge, and little guidance exists on how best to do this. End-users being onboarded to a new dashboard can be either confused and overwhelmed, or disinterested and disengaged, depending on the user\u2019s expertise. We propose interactive dashboard tours (d-tours) as semi-automated onboarding experiences for variable user expertise that preserve the user\u2019s agency, interest, and engagement. Our interactive tours concept draws from open-world game design to give the user freedom in choosing their path in the onboarding. We have implemented the concept in a tool called D-Tour Prototype that allows authors to craft custom and interactive dashboard tours from scratch or using automatic templates. Automatically generated tours can still be customized to use different media (such as video, audio, or highlighting) or new narratives to produce a tailored onboarding experience for individual users or groups. We demonstrate the usefulness of interactive dashboard tours through use cases and expert interviews. The evaluation shows that the authors find the automation in the D-Tour Prototype helpful and time-saving, and the users find it engaging and intuitive. 
This paper and all supplemental materials are available at https://osf.io/6fbjp/.","accessible_pdf":false,"authors":[{"affiliations":["Pro2Future GmbH, Linz, Austria","Johannes Kepler University, Linz, Austria"],"email":"vaishali.dhanoa@pro2future.at","is_corresponding":true,"name":"Vaishali Dhanoa"},{"affiliations":["Johannes Kepler University, Linz, Austria"],"email":"andreas.hinterreiter@jku.at","is_corresponding":false,"name":"Andreas Hinterreiter"},{"affiliations":["Johannes Kepler University, Linz, Austria"],"email":"vanessa.fediuk@jku.at","is_corresponding":false,"name":"Vanessa Fediuk"},{"affiliations":["Aarhus University, Aarhus, Denmark"],"email":"elm@cs.au.dk","is_corresponding":false,"name":"Niklas Elmqvist"},{"affiliations":["Institute of Visual Computing & Human-Centered Technology, Vienna, Austria"],"email":"groeller@cg.tuwien.ac.at","is_corresponding":false,"name":"Eduard Gr\u00f6ller"},{"affiliations":["Johannes Kepler University Linz, Linz, Austria"],"email":"marc.streit@jku.at","is_corresponding":false,"name":"Marc Streit"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Vaishali Dhanoa"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1395","time_end":"","time_stamp":"","time_start":"","title":"D-Tour: Semi-Automatic Generation of Interactive Guided Tours for Visualization Dashboard Onboarding","uid":"v-full-1395","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Visualization designers often rely on examples to explore the space of possible designs, yet we have little insight into how examples shape data visualization design outcomes. While the effects of examples have been studied in other disciplines, such as web design or engineering, the results are not readily applicable to visualization design due to inconsistencies in findings and challenges unique to visualization design. Towards bridging this gap, we conduct an exploratory experiment involving 32 data visualization designers focusing on the influence of five factors (timing, quantity, diversity, data topic similarity, and data schema similarity) on objectively measurable design outcomes (e.g., numbers of designs and idea transfers). Our quantitative analysis shows that when examples are introduced after initial brainstorming, designers curate examples with topics less similar to the dataset they are working on and produce more designs with a high variation in visualization components. Also, designers copy more ideas from examples with higher data schema similarities. Our qualitative analysis of participants\u2019 thought processes provides insights into why designers incorporate examples into their designs, revealing potential factors that have not been previously investigated. Finally, we discuss how our results inform future work on quantifying designs, improving measures of effectiveness, and supporting example-based visualization design. All supplementary materials are available at https://osf.io/sbp2k/?view_only=ca14af497f5845a0b1b2c616699fefc5","accessible_pdf":false,"authors":[{"affiliations":["University of Maryland, College Park, United States"],"email":"hbako@umd.edu","is_corresponding":true,"name":"Hannah K. 
Bako"},{"affiliations":["The University of Texas at Austin, Austin, United States"],"email":"xinyi.liu@utexas.edu","is_corresponding":false,"name":"Xinyi Liu"},{"affiliations":["University of Maryland, College Park, United States"],"email":"gko1@terpmail.umd.edu","is_corresponding":false,"name":"Grace Ko"},{"affiliations":["Human Data Interaction Lab, College Park, United States"],"email":"hsong02@cs.umd.edu","is_corresponding":false,"name":"Hyemi Song"},{"affiliations":["University of Washington, Seattle, United States"],"email":"leibatt@cs.washington.edu","is_corresponding":false,"name":"Leilani Battle"},{"affiliations":["University of Maryland, College Park, United States"],"email":"leozcliu@umd.edu","is_corresponding":false,"name":"Zhicheng Liu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Hannah K. Bako"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1414","time_end":"","time_stamp":"","time_start":"","title":"Unveiling How Examples Shape Data Visualization Design Outcomes","uid":"v-full-1414","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Various data visualization downstream applications such as reverse engineering and interactive authoring require a vocabulary that describes the structure of visualization scenes and the procedure to manipulate them. A few scene abstractions have been proposed, but they are restricted to specific applications for a limited set of visualization types. A unified and expressive model of data visualization scenes for different downstream applications has been missing. To fill this gap, we present Manipulable Semantic Components (MSC), a computational representation of data visualization scenes, to support applications in scene understanding and augmentation. MSC consists of two parts: a unified object model describing the structure of a visualization scene in terms of semantic components, and a set of operations to generate and modify the scene components. 
We demonstrate the benefits of MSC in three applications: visualization authoring, visualization deconstruction and reuse, and animation specification.","accessible_pdf":false,"authors":[{"affiliations":["University of Maryland, College Park, United States"],"email":"leozcliu@umd.edu","is_corresponding":true,"name":"Zhicheng Liu"},{"affiliations":["University of Maryland, College Park, United States"],"email":"cchen24@umd.edu","is_corresponding":false,"name":"Chen Chen"},{"affiliations":["University of Maryland, College Park, United States"],"email":"hookerj100@gmail.com","is_corresponding":false,"name":"John Hooker"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Zhicheng Liu"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1416","time_end":"","time_stamp":"","time_start":"","title":"Manipulable Semantic Components: a Computational Representation of Data Visualization Scenes","uid":"v-full-1416","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Visualization items\u2014factual questions about visualizations that ask viewers to accomplish visualization tasks\u2014are regularly used in the field of information visualization as educational and evaluative materials. For example, researchers of visualization literacy require large, diverse banks of items to conduct studies where the same skill is measured repeatedly on the same participants. Yet, generating a large number of high-quality, diverse items requires significant time and expertise. To address the critical need for a large number of diverse visualization items in education and research, this paper investigates the potential for large language models (LLMs) to automate the generation of multiple-choice visualization items. Through an iterative design process, we develop an LLM-based pipeline, the VILA (Visualization Items Generated by Large LAnguage Models) pipeline, for efficiently generating visualization items that measure people\u2019s ability to accomplish visualization tasks. We use the VILA pipeline to generate 1,404 candidate items across 12 chart types and 13 visualization tasks. In collaboration with 11 visualization experts, we develop an evaluation rulebook which we then use to rate the quality of all candidate items. The result is a final bank, the VILA bank, of \u223c1,100 items. From this evaluation, we also identify and classify current limitations of LLMs in generating visualization items, and discuss the role of human oversight in ensuring quality. In addition, we demonstrate an application of our work by creating a visualization literacy test, VILA-VLAT, which measures people\u2019s ability to complete a diverse set of tasks on various types of visualizations; to show the potential of this application, we assess the convergent validity of VILA-VLAT by comparing it to the existing test VLAT via an online study (R = 0.70). Lastly, we discuss the application areas of the VILA pipeline and the VILA bank and provide practical recommendations for their use. 
All supplemental materials are available at https://osf.io/ysrhq/?view_only=e31b3ddf216e4351bb37bcedf744e9d6.","accessible_pdf":false,"authors":[{"affiliations":["Northwestern University, Evanston, United States"],"email":"yuancui2025@u.northwestern.edu","is_corresponding":true,"name":"Yuan Cui"},{"affiliations":["Northwestern University, Evanston, United States"],"email":"wanqian.ge@northwestern.edu","is_corresponding":false,"name":"Lily W. Ge"},{"affiliations":["Worcester Polytechnic Institute, Worcester, United States"],"email":"yding5@wpi.edu","is_corresponding":false,"name":"Yiren Ding"},{"affiliations":["Worcester Polytechnic Institute, Worcester, United States"],"email":"ltharrison@wpi.edu","is_corresponding":false,"name":"Lane Harrison"},{"affiliations":["Northwestern University, Evanston, United States"],"email":"fumeng.p.yang@gmail.com","is_corresponding":false,"name":"Fumeng Yang"},{"affiliations":["Northwestern University, Chicago, United States"],"email":"matthew.kay@gmail.com","is_corresponding":false,"name":"Matthew Kay"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yuan Cui"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1422","time_end":"","time_stamp":"","time_start":"","title":"Promises and Pitfalls: Using Large Language Models to Generate Visualization Items","uid":"v-full-1422","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Comics have been shown to be an effective method for sequential data-driven storytelling, especially for dynamic graphs that change over time. However, manually creating a data-driven comic for a dynamic graph is currently time-consuming, complex, and error-prone. In this paper, we propose DG Comics, a novel comic authoring tool for dynamic graphs that allows users to semi-automatically build the comic and annotate it. The tool uses a hierarchical clustering algorithm that we newly developed for segmenting consecutive snapshots of the dynamic graph while preserving their chronological order. It also provides rich information on both individuals and communities extracted from dynamic graphs in multiple views, where users can explore dynamic graphs and choose what to tell in comics. For evaluation, we provide an example and report results from a user study and expert review.","accessible_pdf":false,"authors":[{"affiliations":["Ulsan National Institute of Science and Technology, Ulsan, Korea, Republic of"],"email":"joohee@unist.ac.kr","is_corresponding":true,"name":"Joohee Kim"},{"affiliations":["Ulsan National Institute of Science and Technology, Ulsan, Korea, Republic of"],"email":"gusdnr0916@unist.ac.kr","is_corresponding":false,"name":"Hyunwook Lee"},{"affiliations":["Ulsan National Institute of Science and Technology, Ulsan, Korea, Republic of"],"email":"ducnm@unist.ac.kr","is_corresponding":false,"name":"Duc M. 
Nguyen"},{"affiliations":["Australian National University, Canberra, Australia"],"email":"minjeong.shin@anu.edu.au","is_corresponding":false,"name":"Minjeong Shin"},{"affiliations":["IBM Research, Cambridge, United States"],"email":"bumchul.kwon@us.ibm.com","is_corresponding":false,"name":"Bum Chul Kwon"},{"affiliations":["UNIST, Ulsan, Korea, Republic of"],"email":"sako@unist.ac.kr","is_corresponding":false,"name":"Sungahn Ko"},{"affiliations":["Aarhus University, Aarhus, Denmark"],"email":"elm@cs.au.dk","is_corresponding":false,"name":"Niklas Elmqvist"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Joohee Kim"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1425","time_end":"","time_stamp":"","time_start":"","title":"DG Comics: Semi-Automatically Authoring Graph Comics for Dynamic Graphs","uid":"v-full-1425","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Numerical simulation serves as a cornerstone in scientific modeling, yet the process of fine-tuning simulation parameters poses significant challenges. Conventionally, parameter adjustment relies on extensive numerical simulations, data analysis, and expert insights, resulting in substantial computational costs and low efficiency. The emergence of deep learning in recent years has provided promising avenues for more efficient exploration of parameter spaces. However, existing approaches often lack intuitive methods for precise parameter adjustment and optimization. To tackle these challenges, we introduce ParamsDrag, a model that facilitates parameter space exploration through direct interaction with visualizations. Inspired by DragGAN, our ParamsDrag model operates in three steps. First, the generative component of ParamsDrag generates visualizations based on the input simulation parameters. Second, by directly dragging structure-related features in the visualizations, users can intuitively understand the controlling effect of different parameters. Third, with the understanding from the earlier step, users can steer ParamsDrag to produce dynamic visual outcomes. 
Through experiments conducted on real-world simulations and comparisons with state-of-the-art deep learning based approaches, we demonstrate the efficacy of our solution.","accessible_pdf":false,"authors":[{"affiliations":["Computer Network Information Center, Chinese Academy of Sciences, Beijing, China","University of Chinese Academy of Sciences, Beijing, China"],"email":"liguan@sccas.cn","is_corresponding":true,"name":"Guan Li"},{"affiliations":["Beijing Forestry University, Beijing, China"],"email":"leo_edumail@163.com","is_corresponding":false,"name":"Yang Liu"},{"affiliations":["Computer Network Information Center, Chinese Academy of Sciences, Beijing, China"],"email":"sgh@sccas.cn","is_corresponding":false,"name":"Guihua Shan"},{"affiliations":["Chinese Academy of Sciences, Beijing, China"],"email":"chengshiyu@cnic.cn","is_corresponding":false,"name":"Shiyu Cheng"},{"affiliations":["Beijing Forestry University, Beijing, China"],"email":"weiqun.cao@126.com","is_corresponding":false,"name":"Weiqun Cao"},{"affiliations":["Visa Research, Palo Alto, United States"],"email":"junpeng.wang.nk@gmail.com","is_corresponding":false,"name":"Junpeng Wang"},{"affiliations":["National Taiwan Normal University, Taipei City, Taiwan"],"email":"caseywang777@gmail.com","is_corresponding":false,"name":"Ko-Chih Wang"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Guan Li"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1427","time_end":"","time_stamp":"","time_start":"","title":"ParamsDrag: Interactive Parameter Space Exploration via Image-Space Dragging","uid":"v-full-1427","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Differential privacy ensures the security of individual privacy but poses challenges to data exploration processes because the limited privacy budget restricts the flexibility of exploration and the noisy feedback to data requests leads to confusing uncertainty. In this study, we are the first to describe such exploration scenarios, including the underlying requirements and available exploration strategies. To facilitate practical applications, we propose a visual analysis approach to the formulation of exploration strategies. Our approach applies a reinforcement learning model to provide diverse suggestions for exploration strategies according to the exploration intent of users. A novel visual design for representing uncertainty in correlation patterns is integrated into our prototype system to support the proposed approach. Finally, we conducted a user study and two case studies. 
The results of these studies verified that our approach can help develop strategies that satisfy the exploration intent of users.","accessible_pdf":false,"authors":[{"affiliations":["Nankai University, Tianjin, China"],"email":"wangxumeng@nankai.edu.cn","is_corresponding":true,"name":"Xumeng Wang"},{"affiliations":["Nankai University, Tianjin, China"],"email":"jiaoshuangcheng@mail.nankai.edu.cn","is_corresponding":false,"name":"Shuangcheng Jiao"},{"affiliations":["Arizona State University, Tempe, United States"],"email":"cbryan16@asu.edu","is_corresponding":false,"name":"Chris Bryan"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Xumeng Wang"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1438","time_end":"","time_stamp":"","time_start":"","title":"Defogger: A Visual Analysis Approach for Data Exploration of Sensitive Data Protected by Differential Privacy","uid":"v-full-1438","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"We are currently witnessing an increase in web-based, data-driven initiatives that explain complex, contemporary issues through data and visualizations: climate change, sustainability, AI, or cultural discoveries. Many of these projects call themselves \"atlases\", a term that historically referred to collections of maps or scientific illustrations. To answer the question of what makes a \"visualization atlas\", we conducted a systematic analysis of 33 visualization atlases and semi-structured interviews with eight visualization atlas creators. Based on our results, we contribute (1) a definition of visualization atlases as an emerging format to present complex topics in a holistic, data-driven, and curated way through visualization, (2) a set of design patterns and design dimensions that led to (3) defining 5 visualization atlas genres, and (4) insights into atlas creation from the interviews. We found that visualization atlases are unique in that they combine exploratory visualization with narrative elements from data-driven storytelling and structured navigation mechanisms. They can act as reference, communication, or discovery tools targeting a wide range of audiences with different levels of domain knowledge. 
We conclude with a discussion of current design practices and emerging questions around the ethics and potential real-world impact of visualization atlases, aiming to inform their design and study.","accessible_pdf":false,"authors":[{"affiliations":["The University of Edinburgh, Edinburgh, United Kingdom"],"email":"jinrui.w@outlook.com","is_corresponding":true,"name":"Jinrui Wang"},{"affiliations":["Newcastle University, Newcastle Upon Tyne, United Kingdom"],"email":"xinhuan.shu@gmail.com","is_corresponding":false,"name":"Xinhuan Shu"},{"affiliations":["Inria, Bordeaux, France","University of Edinburgh, Edinburgh, United Kingdom"],"email":"bbach@inf.ed.ac.uk","is_corresponding":false,"name":"Benjamin Bach"},{"affiliations":["University of Edinburgh, Edinburgh, United Kingdom"],"email":"uhinrich@ed.ac.uk","is_corresponding":false,"name":"Uta Hinrichs"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Jinrui Wang"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1446","time_end":"","time_stamp":"","time_start":"","title":"Visualization Atlases: Explaining and Exploring Complex Topics through Data, Visualization, and Narration","uid":"v-full-1446","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"We present a systematic review, an empirical study, and a first set of considerations for designing visualizations in motion, derived from a concrete scenario in which these visualizations were used to support a primary task. In practice, when viewers are confronted with embedded visualizations, they often have to focus on a primary task and can only quickly glance at a visualization showing rich, often dynamically updated, information. As such, the visualizations must be designed so as not to distract from the primary task, while at the same time being readable and useful for aiding the primary task. For example, in games, players who are engaged in a battle have to look at their enemies but also read the remaining health of their own game character from the health bar over their character's head. Many trade-offs are possible in the design of embedded visualizations in such dynamic scenarios, which we explore in-depth in this paper with a focus on user experience. We use video games as an example of an application context with a rich existing set of visualizations in motion. We begin our work with a systematic review of in-game visualizations in motion. Next, we conduct an empirical user study to investigate how different embedded visualizations in motion designs impact user experience. We conclude with a set of considerations and trade-offs for designing visualizations in motion more broadly as derived from what we learned about video games. 
All supplemental materials of this paper are available at osf.io/3v8wm/.","accessible_pdf":false,"authors":[{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Orsay, France","Inria, Saclay, France"],"email":"yaolijie0219@gmail.com","is_corresponding":true,"name":"Lijie Yao"},{"affiliations":["Univerisit\u00e9 Paris-Saclay, CNRS, Orsay, France","Inria, Saclay, France"],"email":"federicabucchieri@gmail.com","is_corresponding":false,"name":"Federica Bucchieri"},{"affiliations":["Carleton University, Ottawa, Canada"],"email":"dieselfish@gmail.com","is_corresponding":false,"name":"Victoria McArthur"},{"affiliations":["LISN, Universit\u00e9 Paris-Saclay, CNRS, INRIA, Orsay, France"],"email":"anastasia.bezerianos@universite-paris-saclay.fr","is_corresponding":false,"name":"Anastasia Bezerianos"},{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Orsay, France","Inria, Saclay, France"],"email":"petra.isenberg@inria.fr","is_corresponding":false,"name":"Petra Isenberg"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Lijie Yao"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1451","time_end":"","time_stamp":"","time_start":"","title":"User Experience of Visualizations in Motion: A Case Study and Design Considerations","uid":"v-full-1451","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"This paper presents a practical approach for the optimization of topological simplification, a central pre-processing step for the analysis and visualization of scalar data. Given an input scalar field f and a set of \u201csignal\u201d persistence pairs to maintain, our approach produces an output field g that is close to f and which optimizes (i) the cancellation of \u201cnon-signal\u201d pairs, while (ii) preserving the \u201csignal\u201d pairs. In contrast to pre-existing simplification approaches, our method is not restricted to persistence pairs involving extrema and can thus address a larger class of topological features, in particular saddle pairs in three-dimensional scalar data. Our approach leverages recent generic persistence optimization frameworks and extends them with tailored accelerations specific to the problem of topological simplification. Extensive experiments report substantial accelerations over these frameworks, thereby making topological simplification optimization practical for real-life datasets. Our work enables a direct visualization and analysis of the topologically simplified data, e.g., via isosurfaces of simplified topology (fewer components and handles). We apply our approach to the extraction of prominent filament structures in three-dimensional data. Specifically, we show that our pre-simplification of the data leads to practical improvements over standard topological techniques for removing filament loops. We also show how our framework can be used to repair genus defects in surface processing. 
Finally, we provide a C++ implementation for reproducibility purposes.","accessible_pdf":false,"authors":[{"affiliations":["CNRS, Paris, France","SORBONNE UNIVERSITE, Paris, France"],"email":"mohamed.kissi@lip6.fr","is_corresponding":true,"name":"Mohamed KISSI"},{"affiliations":["CNRS, Paris, France","Sorbonne Universit\u00e9, Paris, France"],"email":"mathieu.pont@lip6.fr","is_corresponding":false,"name":"Mathieu Pont"},{"affiliations":["University of Arizona, Tucson, United States"],"email":"josh@cs.arizona.edu","is_corresponding":false,"name":"Joshua A Levine"},{"affiliations":["CNRS, Paris, France","Sorbonne Universit\u00e9, Paris, France"],"email":"julien.tierny@sorbonne-universite.fr","is_corresponding":false,"name":"Julien Tierny"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Mohamed KISSI"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1461","time_end":"","time_stamp":"","time_start":"","title":"A Practical Solver for Scalar Data Topological Simplification","uid":"v-full-1461","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Trained on vast corpora, Large Language Models (LLMs) have the potential to encode visualization design knowledge and best practices. However, if they fail to do so, they might provide unreliable visualization recommendations. What visualization design preferences, then, have LLMs learned? We contribute DracoGPT, an approach for extracting and modeling visualization design preferences from LLMs. To assess varied tasks, we develop two pipelines---DracoGPT-Rank and DracoGPT-Recommend---to model LLMs prompted to either rank or recommend visual encoding specifications. We use Draco as a shared knowledge base in which to represent LLM design preferences and compare them to best practices from empirical research. We demonstrate that DracoGPT models the preferences expressed by LLMs well, enabling analysis in terms of Draco design constraints. Across a suite of backing LLMs, we find that DracoGPT-Rank and DracoGPT-Recommend moderately agree with each other, but both substantively diverge from guidelines drawn from human subjects experiments. Future work can build on our approach to expand Draco's knowledge base to model a richer set of preferences and serve as a reliable and cost-effective stand-in for LLMs.","accessible_pdf":false,"authors":[{"affiliations":["University of Washington, Seattle, United States"],"email":"wwill@cs.washington.edu","is_corresponding":true,"name":"Huichen Will Wang"},{"affiliations":["University of Washington, Seattle, United States"],"email":"mgord@cs.stanford.edu","is_corresponding":false,"name":"Mitchell L. 
Gordon"},{"affiliations":["University of Washington, Seattle, United States"],"email":"leibatt@cs.washington.edu","is_corresponding":false,"name":"Leilani Battle"},{"affiliations":["University of Washington, Seattle, United States"],"email":"jheer@uw.edu","is_corresponding":false,"name":"Jeffrey Heer"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Huichen Will Wang"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1472","time_end":"","time_stamp":"","time_start":"","title":"DracoGPT: Extracting Visualization Design Preferences from Large Language Models","uid":"v-full-1472","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Recent advancements in Large Language Models (LLMs) and Prompt Engineering have made chatbot customization more accessible, significantly reducing barriers to tasks that previously required programming skills. However, prompt evaluation, especially at the dataset scale, remains complex due to the need to assess prompts across thousands of test instances within a dataset. Our study, based on a comprehensive literature review and pilot study, summarized five critical challenges in prompt evaluation. In response, we introduce a feature-oriented workflow for systematic prompt evaluation, focusing on text summarization. Our workflow advocates feature metrics such as complexity, formality, or naturalness, instead of using traditional quality metrics like ROUGE. This design choice enables a more user-friendly evaluation of prompts, as it guides users in sorting through the ambiguity inherent in natural language. To support this workflow, we introduce Awesum, a visual analytics system that facilitates identifying optimal prompt refinements through interactive visualizations, featuring a novel Prompt Comparator design that employs a BubbleSet-inspired design enhanced by dimensionality reduction techniques. We evaluate the effectiveness and general applicability of the system with practitioners from various domains and found that (1) our design helps overcome the learning curve for non-technical people to conduct a systematic evaluation, and (2) our feature-oriented workflow has the potential to generalize to other NLG and image-generation tasks. 
For future work, we advocate moving towards feature-oriented evaluation of LLM prompts and discuss unsolved challenges in terms of human-agent interaction.","accessible_pdf":false,"authors":[{"affiliations":["University of California Davis, Davis, United States"],"email":"ytlee@ucdavis.edu","is_corresponding":true,"name":"Sam Yu-Te Lee"},{"affiliations":["University of California, Davis, Davis, United States"],"email":"abahukhandi@ucdavis.edu","is_corresponding":false,"name":"Aryaman Bahukhandi"},{"affiliations":["University of California at Davis, Davis, United States"],"email":"dyuliu@ucdavis.edu","is_corresponding":false,"name":"Dongyu Liu"},{"affiliations":["University of California at Davis, Davis, United States"],"email":"ma@cs.ucdavis.edu","is_corresponding":false,"name":"Kwan-Liu Ma"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Sam Yu-Te Lee"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1474","time_end":"","time_stamp":"","time_start":"","title":"Towards Dataset-scale and Feature-oriented Evaluation of Text Summarization in Large Language Model Prompts","uid":"v-full-1474","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"We propose the notion of Attention-aware Visualizations (AAVs) that track the user's perception of a visual representation over time and feed this information back to the visualization. This idea is particularly useful for ubiquitous and immersive analytics where knowing which embedded visualizations the user is looking at can be used to make visualizations react appropriately to the user's attention: for example, by highlighting data the user has not yet seen. We can separate the approach into three components: (1) measuring the user's gaze on a visualization and its parts; (2) tracking the user's attention over time; and (3) reactively modifying the visual representation based on the current attention metric. In this paper, we present two separate implementations of AAV: a 2D numeric integration of attention for web-based visualizations that can use an embodied eye-tracker to capture the user's gaze, and a 3D implementation that uses the stencil buffer to track the visibility of each individual mark in a visualization. Both methods provide similar mechanisms for accumulating attention over time and changing the appearance of marks in response. We also present results from a controlled laboratory experiment studying different visual feedback mechanisms for attention.","accessible_pdf":false,"authors":[{"affiliations":["Aarhus University, Aarhus, Denmark"],"email":"arvind@cs.au.dk","is_corresponding":true,"name":"Arvind Srinivasan"},{"affiliations":["Aarhus University, Aarhus N, Denmark"],"email":"johannes@ellemose.eu","is_corresponding":false,"name":"Johannes Ellemose"},{"affiliations":["Bangor University, Bangor, United Kingdom"],"email":"p.butcher@bangor.ac.uk","is_corresponding":false,"name":"Peter W. S. Butcher"},{"affiliations":["Bangor University, Bangor, United Kingdom"],"email":"p.ritsos@bangor.ac.uk","is_corresponding":false,"name":"Panagiotis D. 
Ritsos"},{"affiliations":["Aarhus University, Aarhus, Denmark"],"email":"elm@cs.au.dk","is_corresponding":false,"name":"Niklas Elmqvist"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Arvind Srinivasan"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1480","time_end":"","time_stamp":"","time_start":"","title":"Attention-Aware Visualization: Tracking and Responding to User Perception Over Time","uid":"v-full-1480","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Egocentric networks, often visualized as node-link diagrams, portray the complex relationship (link) dynamics between an entity (node) and others. However, common analytics tasks are multifaceted, encompassing interactions among four key aspects: strength, function, structure, and content. Current node-link visualization designs may fall short, focusing narrowly on certain aspects and neglecting the holistic, dynamic nature of egocentric networks. To bridge this gap, we introduce SpreadLine, a novel visualization framework designed to enable the visual exploration of egocentric networks from these four aspects at the microscopic level. Leveraging the intuitive appeal of storyline visualizations, SpreadLine adopts a storyline-based design to represent entities and their evolving relationships. We further encode essential topological information in the layout and condense the contextual information in a metro map metaphor, allowing for a more engaging and effective way to explore temporal and attribute-based information. To guide our work, with a thorough review of pertinent literature, we have distilled a task taxonomy that addresses the analytical needs specific to egocentric network exploration. Acknowledging the diverse analytical requirements of users, SpreadLine offers customizable encodings to enable users to tailor the framework for their tasks. We demonstrate the efficacy and general applicability of SpreadLine through three diverse real-world case studies and a usability study.","accessible_pdf":false,"authors":[{"affiliations":["University of California, Davis, Davis, United States"],"email":"yskuo@ucdavis.edu","is_corresponding":true,"name":"Yun-Hsin Kuo"},{"affiliations":["University of California at Davis, Davis, United States"],"email":"dyuliu@ucdavis.edu","is_corresponding":false,"name":"Dongyu Liu"},{"affiliations":["University of California at Davis, Davis, United States"],"email":"ma@cs.ucdavis.edu","is_corresponding":false,"name":"Kwan-Liu Ma"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yun-Hsin Kuo"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1483","time_end":"","time_stamp":"","time_start":"","title":"SpreadLine: Visualizing Egocentric Dynamic Influence","uid":"v-full-1483","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Referential gestures, or as termed in linguistics, {\\em deixis}, are an essential part of communication around data visualizations. 
Despite their importance, such gestures are often overlooked when documenting data analysis meetings. Transcripts, for instance, fail to capture gestures, and video recordings may not adequately capture or emphasize them. We introduce a novel method for documenting collaborative data meetings that treats deixis as a first-class citizen. Our proposed framework captures cursor-based gestural data along with audio and converts them into interactive documents. The framework leverages a large language model to identify word correspondences with gestures. These identified references are used to create context-based annotations in the resulting interactive document. We assess the effectiveness of our proposed method through a user study, finding that participants preferred our automated interactive documentation over recordings, transcripts, and manual note-taking. Furthermore, we derive a preliminary taxonomy of cursor-based deictic gestures from participant actions during the study. This taxonomy offers further opportunities for better utilizing cursor-based deixis in collaborative data analysis scenarios.","accessible_pdf":false,"authors":[{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"hatch.on27@gmail.com","is_corresponding":true,"name":"Chang Han"},{"affiliations":["The University of Utah, Salt Lake City, United States"],"email":"kisaacs@sci.utah.edu","is_corresponding":false,"name":"Katherine E. Isaacs"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Chang Han"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1487","time_end":"","time_stamp":"","time_start":"","title":"A Deixis-Centered Approach for Documenting Remote Synchronous Communication around Data Visualizations","uid":"v-full-1487","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"A year ago, we submitted an IEEE VIS paper entitled \u201cSwaying the Public? Impacts of Election Forecast Visualizations on Emotion, Trust, and Intention in the 2022 U.S. Midterms\u201d [68], which was later bestowed with the honor of a best paper award. Yet, studying such a complex phenomenon required us to explore many more design paths than we could count, and certainly more than we could document in a single paper. This paper, then, is the unwritten prequel\u2014the backstory. It chronicles our journey from a simple idea\u2014to study visualizations for election forecasts\u2014through obstacles such as developing meaningfully different, easy-to-understand forecast visualizations, crafting professional-looking forecasts, and grappling with how to study perceptions of the forecasts before, during, and after the 2022 U.S. midterm elections. Our backstory began with developing a design space for two-party election forecasts, defining dimensions such as data transformations, visual channels, layouts, and types of animated narratives. We then qualitatively evaluated ten representative prototypes in this design space through interviews with 13 participants. The interviews yielded invaluable insights into how people interpret uncertainty visualizations and reason about probability in a U.S. 
election context, such as confounding win probability with vote share and erroneously forming connections between concrete visual representations (like dots) and real-world entities (like votes). Informed by these insights, we revised our prototypes to address ambiguity in interpreting visual encodings, particularly through the inclusion of extensive annotations. As we navigated these design paths, we contributed a design space and insights that may help others when designing uncertainty visualizations. We also hope that our design lessons and research process can inspire the research community when exploring topics related to designing visualizations for the general public.","accessible_pdf":false,"authors":[{"affiliations":["Northwestern University, Evanston, United States"],"email":"fumeng.p.yang@gmail.com","is_corresponding":true,"name":"Fumeng Yang"},{"affiliations":["Northwestern University, Evanston, United States","Northwestern University, Evanston, United States"],"email":"mandicai2028@u.northwestern.edu","is_corresponding":false,"name":"Mandi Cai"},{"affiliations":["Northwestern University, Evanston, United States"],"email":"chloemortenson2026@u.northwestern.edu","is_corresponding":false,"name":"Chloe Rose Mortenson"},{"affiliations":["Northwestern University, Evanston, United States"],"email":"hoda@u.northwestern.edu","is_corresponding":false,"name":"Hoda Fakhari"},{"affiliations":["Northwestern University, Evanston, United States"],"email":"aysedlokmanoglu@gmail.com","is_corresponding":false,"name":"Ayse Deniz Lokmanoglu"},{"affiliations":["Northwestern University, Evanston, United States"],"email":"nicholas.diakopoulos@gmail.com","is_corresponding":false,"name":"Nicholas Diakopoulos"},{"affiliations":["Northwestern University, Evanston, United States"],"email":"erik.nisbet@northwestern.edu","is_corresponding":false,"name":"Erik Nisbet"},{"affiliations":["Northwestern University, Chicago, United States"],"email":"matthew.kay@gmail.com","is_corresponding":false,"name":"Matthew Kay"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Fumeng Yang"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1488","time_end":"","time_stamp":"","time_start":"","title":"The Backstory to \u201cSwaying the Public\u201d: A Design Chronicle of Election Forecast Visualizations","uid":"v-full-1488","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Projecting high-dimensional vectors into two dimensions for visualization, known as embedding visualization, facilitates perceptual reasoning and interpretation. Comparison of multiple embedding visualizations drives decision-making in many domains, but conventional comparison methods are limited by a reliance on direct point correspondences. This requirement precludes embedding comparisons without point correspondences, such as two different datasets of annotated images, and fails to capture meaningful higher-level relationships among point groups. To address these shortcomings, we propose a general framework to compare embedding visualizations based on shared class labels rather than individual points. 
Our approach partitions points into regions corresponding to three key class concepts---confusion, neighborhood, and relative size---to characterize intra- and inter-class relationships. Informed by a preliminary user study, we realize an implementation of our framework using perceptual neighborhood graphs to define these regions and introduce metrics to quantify each concept. We demonstrate the generality of our framework with use cases from machine learning and single-cell biology, highlighting our metrics' ability to surface insightful comparisons across label hierarchies. To assess the effectiveness of our approach, we conducted a user study with five machine learning researchers and six single-cell biologists using an interactive and scalable prototype developed in Python and Rust. Our metrics enabled more structured comparison through visual guidance and increased participants\u2019 confidence in their findings.","accessible_pdf":false,"authors":[{"affiliations":["Harvard Medical School, Boston, United States"],"email":"trevor_manz@g.harvard.edu","is_corresponding":true,"name":"Trevor Manz"},{"affiliations":["Ozette Technologies, Seattle, United States"],"email":"f.lekschas@gmail.com","is_corresponding":false,"name":"Fritz Lekschas"},{"affiliations":["Ozette Technologies, Seattle, United States"],"email":"palmergreene@gmail.com","is_corresponding":false,"name":"Evan Greene"},{"affiliations":["Ozette Technologies, Seattle, United States"],"email":"greg@ozette.com","is_corresponding":false,"name":"Greg Finak"},{"affiliations":["Harvard Medical School, Boston, United States"],"email":"nils@hms.harvard.edu","is_corresponding":false,"name":"Nils Gehlenborg"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Trevor Manz"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1489","time_end":"","time_stamp":"","time_start":"","title":"A General Framework for Comparing Embedding Visualizations Across Class-Label Hierarchies","uid":"v-full-1489","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Topological abstractions offer a method to summarize the behavior of vector fields, but computing them robustly can be challenging due to numerical precision issues. One alternative is to represent the vector field using a discrete approach, which constructs a collection of pairs of simplices in the input mesh that satisfies criteria introduced by Forman\u2019s discrete Morse theory. While numerous approaches exist to compute pairs in the restricted case of the gradient of a scalar field, state-of-the-art algorithms for the general case of vector fields require expensive optimization procedures. This paper introduces a fast, novel approach for pairing simplices of two-dimensional, triangulated vector fields that do not vary in time. The key insight of our approach is that we can employ a local evaluation, inspired by the approach used to construct a discrete gradient field, where every cell in a mesh is considered by no more than one of its vertices. Specifically, we observe that for any edge in the input mesh, we can uniquely assign an outward direction of flow. 
We can further expand this consistent notion of outward flow at each vertex, which corresponds to the concept of a downhill flow in the case of scalar fields. Working with outward flow enables a linear-time algorithm that processes the (outward) neighborhoods of each vertex one-by-one, similar to the approach used for scalar fields. We couple our approach to constructing discrete vector fields with a method to extract, simplify, and visualize topological features. Empirical results on analytic and simulation data demonstrate drastic improvements in running time, produce features similar to the current state-of-the-art, and show the application of simplification to large, complex flows.","accessible_pdf":false,"authors":[{"affiliations":["University of Arizona, Tucson, United States"],"email":"finkent@arizona.edu","is_corresponding":true,"name":"Tanner Finken"},{"affiliations":["Sorbonne Universit\u00e9, Paris, France"],"email":"julien.tierny@sorbonne-universite.fr","is_corresponding":false,"name":"Julien Tierny"},{"affiliations":["University of Arizona, Tucson, United States"],"email":"josh@cs.arizona.edu","is_corresponding":false,"name":"Joshua A Levine"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Tanner Finken"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1494","time_end":"","time_stamp":"","time_start":"","title":"Localized Evaluation for Constructing Discrete Vector Fields","uid":"v-full-1494","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Haptic feedback provides an essential sensory stimulus crucial for interacting and analyzing three-dimensional spatio-temporal phenomena on surface visualizations. Given its ability to provide enhanced spatial perception and scene maneuverability, virtual reality (VR) catalyzes haptic interactions on surface visualizations. Various interaction modes, encompassing both mid-air and on-surface interactions---with or without the application of assisting force stimuli---have been explored using haptic force feedback devices. In this paper, we evaluate the use of on-surface and assisted on-surface haptic modes of interaction compared to a no-haptic interaction mode. A force-based haptic stylus is used for all three modalities; the on-surface mode uses collision based forces, whereas the assisted on-surface mode is accompanied by an additional snapping force. We conducted a within-subjects user study involving fundamental interaction tasks performed on surface visualizations. Keeping a consistent visual design across all three modes, our study incorporates tasks that require the localization of the highest, lowest, and random points on surfaces; and tasks that focus on brushing curves on surfaces with varying complexity and occlusion levels. Our findings show that participants took almost the same time to brush curves using all the interaction modes. They could draw smoother curves using the on-surface interaction modes compared to the no-haptic mode. However, the assisted on-surface mode provided better accuracy than the on-surface mode. The on-surface mode was slower in point localization, but the accuracy depended on the visual cues and occlusions associated with the tasks. 
Finally, we discuss participant feedback on using haptic force feedback as a tangible input modality and share takeaways to aid the design of haptics-based tangible interactions for surface visualizations.","accessible_pdf":false,"authors":[{"affiliations":["University of Calgary, Calgary, Canada"],"email":"hamza.afzaal@ucalgary.ca","is_corresponding":true,"name":"Hamza Afzaal"},{"affiliations":["University of Calgary, Calgary, Canada"],"email":"ualim@ucalgary.ca","is_corresponding":false,"name":"Usman Alim"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Hamza Afzaal"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1500","time_end":"","time_stamp":"","time_start":"","title":"Evaluating Force-based Haptics for Immersive Tangible Interactions with Surface Visualizations","uid":"v-full-1500","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Visualization is widely used for exploring personal data, but many visualization authoring systems do not support expressing data in flexible, personal, and organic layouts. Sketching is an accessible tool for experimenting with visualization designs, but formalizing sketched elements into structured data representations is difficult, as modifying hand-drawn glyphs to encode data when available is labour-intensive and error-prone. We propose an approach where authors structure their own expressive templates, capturing implicit style as well as explicit data mappings, through sketching a representative visualization for an envisioned or partial dataset. Our approach seeks to support freeform exploration and partial specification, balanced against interactive machine support for specifying the generative procedural rules. We implement this approach in DataGarden, a system designed to support hierarchical data visualizations, and evaluate it with 12 participants in a reproduction study and four experts in a freeform creative task. Participants readily picked up the core idea of template authoring, and the variety of workflows we observed highlights how this process serves design and data ideation as well as visual constraint iteration. 
We discuss challenges in implementing the design considerations underpinning DataGarden, and illustrate its potential in a gallery of visualizations generated from authored templates.","accessible_pdf":false,"authors":[{"affiliations":["Universit\u00e9 Paris-Saclay, Orsay, France"],"email":"anna.offenwanger@gmail.com","is_corresponding":true,"name":"Anna Offenwanger"},{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Inria, LISN, Orsay, France"],"email":"theophanis.tsandilas@inria.fr","is_corresponding":false,"name":"Theophanis Tsandilas"},{"affiliations":["University of Toronto, Toronto, Canada"],"email":"fanny@dgp.toronto.edu","is_corresponding":false,"name":"Fanny Chevalier"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Anna Offenwanger"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1502","time_end":"","time_stamp":"","time_start":"","title":"DataGarden: Formalizing Personal Sketches into Structured Visualization Templates","uid":"v-full-1502","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"The increasing reliance on Large Language Models (LLMs) for health information seeking can pose severe risks due to the potential for misinformation and the complexity of these topics. This paper introduces KnowNet, a visualization system that integrates LLMs with Knowledge Graphs (KG) to provide enhanced accuracy and structured exploration. One core idea in KnowNet is to conceptualize the understanding of a subject as the gradual construction of a graph visualization, aligning the user's cognitive process with both the structured data in KGs and the unstructured outputs from LLMs. Specifically, we extracted triples (e.g., entities and their relations) from LLM outputs and mapped them into the validated information and supported evidence in external KGs. Based on the neighborhood of the currently explored entities in KGs, KnowNet provides recommendations for further inquiry, aiming to guide a comprehensive understanding without overlooking critical aspects. A progressive graph visualization is proposed to show the alignment between LLMs and KGs, track previous inquiries, and connect this history with current queries and next-step recommendations. 
We demonstrate the effectiveness of our system via use cases and expert interviews.","accessible_pdf":false,"authors":[{"affiliations":["University of Minnesota, Minneapolis, United States"],"email":"yan00111@umn.edu","is_corresponding":false,"name":"Youfu Yan"},{"affiliations":["University of Minnesota, Minneapolis, United States"],"email":"hou00127@umn.edu","is_corresponding":false,"name":"Yu Hou"},{"affiliations":["University of Minnesota, Minneapolis, United States"],"email":"xiao0290@umn.edu","is_corresponding":false,"name":"Yongkang Xiao"},{"affiliations":["University of Minnesota, Minneapolis, United States"],"email":"zhan1386@umn.edu","is_corresponding":false,"name":"Rui Zhang"},{"affiliations":["University of Minnesota, Minneapolis, United States"],"email":"qianwen@umn.edu","is_corresponding":true,"name":"Qianwen Wang"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Qianwen Wang"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1503","time_end":"","time_stamp":"","time_start":"","title":"Guided Health-related Information Seeking from LLMs via Knowledge Graph Integration","uid":"v-full-1503","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"A wide range of visualization authoring interfaces enable the creation of highly customized visualizations. However, prioritizing expressiveness often impedes the learnability of the authoring interface. The diversity of users, such as varying computational skills and prior experiences in user interfaces, makes it even more challenging for a single authoring interface to satisfy the needs of a broad audience. In this paper, we introduce a framework to balance learnability and expressivity in a visualization authoring system. Adopting insights from learnability studies, such as multimodal interaction and visualization literacy, we explore the design space of blending multiple visualization authoring interfaces for supporting authoring tasks in a complementary and flexible manner. To evaluate the effectiveness of blending interfaces, we implemented a proof-of-concept system, Blace, that combines four common visualization authoring interfaces\u2014template-based, shelf configuration, natural language, and code editor\u2014that are tightly linked to one another to help users easily relate unfamiliar interfaces to more familiar ones. Using the system, we conducted a user study with 12 domain experts who regularly visualize genomics data as part of their analysis workflow. Participants with varied visualization and programming backgrounds were able to successfully reproduce complex visualization examples without a guided tutorial in the study. Feedback from a post-study qualitative questionnaire further suggests that blending interfaces enabled participants to learn the system easily and assisted them in confidently editing unfamiliar visualization grammar in the code editor, enabling expressive customization. 
Reflecting on our study results and the design of our system, we discuss the different interaction patterns that we identified and design implications for blending visualization authoring interfaces.","accessible_pdf":false,"authors":[{"affiliations":["Harvard Medical School, Boston, United States"],"email":"sehi_lyi@hms.harvard.edu","is_corresponding":true,"name":"Sehi L'Yi"},{"affiliations":["Eindhoven University of Technology, Eindhoven, Netherlands"],"email":"a.v.d.brandt@tue.nl","is_corresponding":false,"name":"Astrid van den Brandt"},{"affiliations":["Harvard Medical School, Boston, United States"],"email":"etowah_adams@hms.harvard.edu","is_corresponding":false,"name":"Etowah Adams"},{"affiliations":["Harvard Medical School, Boston, United States"],"email":"huyen_nguyen@hms.harvard.edu","is_corresponding":false,"name":"Huyen N. Nguyen"},{"affiliations":["Harvard Medical School, Boston, United States"],"email":"nils@hms.harvard.edu","is_corresponding":false,"name":"Nils Gehlenborg"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Sehi L'Yi"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1504","time_end":"","time_stamp":"","time_start":"","title":"Learnable and Expressive Visualization Authoring Through Blended Interfaces","uid":"v-full-1504","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Despite the recent surge of research efforts to make data visualizations accessible to people who are blind or have low-vision (BLV), how to support BLV people's data analysis remains an important and challenging question. As refreshable tactile displays (RTDs) become cheaper and conversational agents continue to improve, their combination provides a promising approach to support BLV people's interactive data analysis. To understand how BLV people would use and react to a system combining an RTD with a conversational agent, we conducted a Wizard-of-Oz study with 11 BLV participants involving line graphs, bar charts, and isarithmic maps. From an analysis of participant interactions, we identified nine distinct patterns and learned that the choice of modalities depended on the type of task and prior experience with tactile graphics. We also found that participants strongly preferred the combination of RTD and speech to a single modality, and that participants with more tactile experience described how tactile images facilitated deeper engagement with the data and supported independent interpretation. 
Our findings will inform the design of interfaces for such interactive mixed-modality systems.","accessible_pdf":false,"authors":[{"affiliations":["Monash University, Melbourne, Australia"],"email":"samuel.reinders@monash.edu","is_corresponding":true,"name":"Samuel Reinders"},{"affiliations":["Monash University, Melbourne, Australia"],"email":"matthew.butler@monash.edu","is_corresponding":false,"name":"Matthew Butler"},{"affiliations":["Monash University, Clayton, Australia"],"email":"ingrid.zukerman@monash.edu","is_corresponding":false,"name":"Ingrid Zukerman"},{"affiliations":["Yonsei University, Seoul, Korea, Republic of","Microsoft Research, Redmond, United States"],"email":"b.lee@yonsei.ac.kr","is_corresponding":false,"name":"Bongshin Lee"},{"affiliations":["Monash University, Melbourne, Australia"],"email":"lizhen.qu@monash.edu","is_corresponding":false,"name":"Lizhen Qu"},{"affiliations":["Monash University, Melbourne, Australia"],"email":"kim.marriott@monash.edu","is_corresponding":false,"name":"Kim Marriott"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Samuel Reinders"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1522","time_end":"","time_stamp":"","time_start":"","title":"When Refreshable Tactile Displays Meet Conversational Agents: Investigating Accessible Data Presentation and Analysis with Touch and Speech","uid":"v-full-1522","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"We introduce DiffFit, a differentiable algorithm for fitting protein atomistic structures into an experimentally reconstructed Cryo-Electron Microscopy (cryo-EM) volume map. This process is essential in structural biology to semi-automatically reconstruct large meso-scale models of complex protein assemblies and complete cellular structures that are based on measured cryo-EM data. Current approaches require manual fitting in 3D that already results in approximately aligned structures, followed by an automated fine-tuning of the alignment. With our DiffFit approach, we enable domain scientists to automatically fit new structures and visualize the fitting results for inspection and interactive revision. Our fitting begins with differentiable 3D rigid transformations of the protein atom coordinates, followed by sampling the density values at its atom coordinates from the target cryo-EM volume. To ensure a meaningful correlation between the sampled densities and the protein structure, we propose a novel loss function based on a multi-resolution volume-array approach and the exploitation of the negative space. This loss function serves as a critical metric for assessing the fitting quality, ensuring both fitting accuracy and improved visualization of the results. We assessed the placement quality of DiffFit with several large, realistic datasets and found its quality to be superior to that of previous methods. We further evaluated our method in two use cases. First, we demonstrate its use in the process of automating the integration of known composite structures into larger protein complexes. Second, we show that it facilitates the fitting of predicted protein domains into volume densities to aid researchers in the identification of unknown proteins. 
We implemented our algorithm as an open-source plugin (github.com/nanovis/DiffFitViewer) in ChimeraX, a leading visualization software in the field. All supplemental materials are available at osf.io/5tx4q.","accessible_pdf":false,"authors":[{"affiliations":["King Abdullah University of Science and Technology, Thuwal, Saudi Arabia"],"email":"deng.luo@kaust.edu.sa","is_corresponding":true,"name":"Deng Luo"},{"affiliations":["King Abdullah University of Science and Technology, Thuwal, Saudi Arabia"],"email":"zainab.alsuwaykit@kaust.edu.sa","is_corresponding":false,"name":"Zainab Alsuwaykit"},{"affiliations":["King Abdullah University of Science and Technology, Thuwal, Saudi Arabia"],"email":"dawar.khan@kaust.edu.sa","is_corresponding":false,"name":"Dawar Khan"},{"affiliations":["King Abdullah University of Science and Technology, Thuwal, Saudi Arabia"],"email":"ondrej.strnad@kaust.edu.sa","is_corresponding":false,"name":"Ond\u0159ej Strnad"},{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Orsay, France","Inria, Saclay, France"],"email":"tobias.isenberg@gmail.com","is_corresponding":false,"name":"Tobias Isenberg"},{"affiliations":["King Abdullah University of Science and Technology, Thuwal, Saudi Arabia"],"email":"ivan.viola@kaust.edu.sa","is_corresponding":false,"name":"Ivan Viola"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Deng Luo"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1533","time_end":"","time_stamp":"","time_start":"","title":"DiffFit: Visually-Guided Differentiable Fitting of Molecule Structures to a Cryo-EM Map","uid":"v-full-1533","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Large Language Models (LLMs) have been successfully adopted for a variety of visualization tasks, but how far are we from perceptually aware LLMs that can predict human takeaways from visualizations? Graphical perception literature has shown that human chart takeaways are sensitive to visualization design choices, such as the spatial arrangement. In this work, we examine how well LLMs can predict such design choice sensitivity when generating takeaways, using bar charts with varying spatial layouts as a case study. We test four common chart arrangements: vertically juxtaposed, horizontally juxtaposed, overlaid, and stacked, through three experimental phases. In Phase I, we identified the optimal configuration of LLMs to generate meaningful chart takeaways, across four LLMs (GPT3.5, GPT4, GPT4V, and Gemini 1.0 Pro), two temperature settings (0, 0.7), four chart specifications (Vega-Lite, Matplotlib, ggplot2, and scene graphs), and several prompting strategies. We found that even state-of-the-art LLMs can struggle to generate factually accurate takeaways. In Phase II, using the optimal LLM configuration, we generated 30 chart takeaways across the four arrangements of bar charts using two datasets, with both zero-shot and one-shot settings. Compared to data on human takeaways from prior work, we found that the takeaways LLMs generate often do not align with human comparisons. In Phase III, we examined the effect of the charts\u2019 underlying data values on takeaway alignment between humans and LLMs, and found both matches and mismatches. 
Overall, our work evaluates the ability of LLMs to emulate human interpretations of data and points to challenges and opportunities in using LLMs to predict human-aligned chart takeaways.","accessible_pdf":false,"authors":[{"affiliations":["University of Washington, Seattle, United States"],"email":"wwill@cs.washington.edu","is_corresponding":true,"name":"Huichen Will Wang"},{"affiliations":["Adobe Research, Seattle, United States"],"email":"jhoffs@adobe.com","is_corresponding":false,"name":"Jane Hoffswell"},{"affiliations":["University of Massachusetts Amherst, Amherst, United States"],"email":"yukithane@gmail.com","is_corresponding":false,"name":"Sao Myat Thazin Thane"},{"affiliations":["Adobe Research, San Jose, United States"],"email":"victorbursztyn2022@u.northwestern.edu","is_corresponding":false,"name":"Victor S. Bursztyn"},{"affiliations":["Georgia Tech, Atlanta, United States"],"email":"cxiong@gatech.edu","is_corresponding":false,"name":"Cindy Xiong Bearfield"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Huichen Will Wang"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1544","time_end":"","time_stamp":"","time_start":"","title":"How Aligned are Human Chart Takeaways and LLM Predictions? A Case Study on Bar Charts with Varying Layouts","uid":"v-full-1544","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Visual validation of regression models in scatterplots is a common practice for assessing model quality, yet its efficacy remains unquantified. We conducted two empirical experiments to investigate individuals' ability to visually validate linear regression models (linear trends) and to examine the impact of common visualization designs on validation quality. The first experiment showed that the level of accuracy for visual estimation of slope (i.e., fitting a line to data) is higher than for visual validation of slope (i.e., accepting a shown line). Notably, we found bias toward slopes that are \u201ctoo steep\u201d in both cases. This led to the novel insight that participants naturally assessed regression with orthogonal distances between the points and the line (i.e., ODR regression) rather than the common vertical distances (OLS regression). In the second experiment, we investigated whether incorporating common designs for regression visualization (error lines, bounding boxes, and confidence intervals) would improve visual validation. Even though error lines reduced validation bias, results failed to show the desired improvements in accuracy for any design. 
Overall, our findings suggest caution in using visual model validation for linear trends in scatterplots.","accessible_pdf":false,"authors":[{"affiliations":["University of Cologne, Cologne, Germany"],"email":"braun@cs.uni-koeln.de","is_corresponding":true,"name":"Daniel Braun"},{"affiliations":["Tufts University, Medford, United States"],"email":"remco@cs.tufts.edu","is_corresponding":false,"name":"Remco Chang"},{"affiliations":["University of Wisconsin - Madison, Madison, United States"],"email":"gleicher@cs.wisc.edu","is_corresponding":false,"name":"Michael Gleicher"},{"affiliations":["University of Cologne, Cologne, Germany"],"email":"landesberger@cs.uni-koeln.de","is_corresponding":false,"name":"Tatiana von Landesberger"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Daniel Braun"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1547","time_end":"","time_stamp":"","time_start":"","title":"Beware of Validation by Eye: Visual Validation of Linear Trends in Scatterplots","uid":"v-full-1547","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Dimensionality reduction techniques are widely used for visualizing high-dimensional data. However, support for interpreting patterns in dimensionality reduction results in the context of the original data space is often insufficient. Consequently, users may struggle to extract insights from the projections. In this paper we introduce DimBridge, a visual analytics tool that allows users to interact with visual patterns in a projection and retrieve corresponding data patterns. DimBridge supports several interactions, allowing users to perform various analyses, from contrasting multiple clusters to explaining complex latent structures. Leveraging first-order predicate logic, DimBridge identifies subspaces in the original dimensions relevant to a queried pattern and provides an interface for users to visualize and interact with them. We demonstrate how DimBridge can help users overcome the challenges associated with interpreting visual patterns in projections.","accessible_pdf":false,"authors":[{"affiliations":["Tufts University, Medford, United States"],"email":"brianmontambault@gmail.com","is_corresponding":true,"name":"Brian Montambault"},{"affiliations":["Tufts University, Medford, United States"],"email":"gabriel.appleby@tufts.edu","is_corresponding":false,"name":"Gabriel Appleby"},{"affiliations":["Tufts University, Boston, United States"],"email":"jen@cs.tufts.edu","is_corresponding":false,"name":"Jen Rogers"},{"affiliations":["Tufts University, Medford, United States"],"email":"camelia_daniela.brumar@tufts.edu","is_corresponding":false,"name":"Camelia D. 
Brumar"},{"affiliations":["Vanderbilt University, Nashville, United States"],"email":"mingwei.li@tufts.edu","is_corresponding":false,"name":"Mingwei Li"},{"affiliations":["Tufts University, Medford, United States"],"email":"remco@cs.tufts.edu","is_corresponding":false,"name":"Remco Chang"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Brian Montambault"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1568","time_end":"","time_stamp":"","time_start":"","title":"DimBridge: Interactive Explanation of Visual Patterns in Dimensionality Reductions with Predicate Logic","uid":"v-full-1568","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Effective security patrol management is critical for ensuring safety in diverse environments such as art galleries, airports, and factories. The behavior of patrols in these situations can be modeled by patrolling games. They simulate the behavior of the patrol and adversary in the building, which is modeled as a graph of interconnected nodes representing rooms. The designers of algorithms solving the game face the problem of analyzing complex graph layouts with temporal dependencies. Therefore, appropriate visual support is crucial for them to work effectively. In this paper, we present a novel tool that helps the designers of patrolling games explore the outcomes of the proposed algorithms and approaches, evaluate their success rate, and propose modifications that can improve their solutions. Our tool offers an intuitive and interactive interface, featuring a detailed exploration of patrol routes and probabilities of taking them, simulation of patrols, and other requested features. In close collaboration with experts in designing patrolling games, we conducted three case studies demonstrating the usage and usefulness of our tool. 
The prototype of the tool, along with exemplary datasets, is available at https://gitlab.fi.muni.cz/formela/strategy-vizualizer.","accessible_pdf":false,"authors":[{"affiliations":["Masaryk University, Brno, Czech Republic"],"email":"langm@mail.muni.cz","is_corresponding":true,"name":"Mat\u011bj Lang"},{"affiliations":["Masaryk University, Brno, Czech Republic"],"email":"469242@mail.muni.cz","is_corresponding":false,"name":"Adam \u0160t\u011bp\u00e1nek"},{"affiliations":["Faculty of Informatics, Masaryk University, Brno, Czech Republic"],"email":"514179@mail.muni.cz","is_corresponding":false,"name":"R\u00f3bert Zvara"},{"affiliations":["Faculty of Informatics, Masaryk University, Brno, Czech Republic"],"email":"rehak@fi.muni.cz","is_corresponding":false,"name":"Vojt\u011bch \u0158eh\u00e1k"},{"affiliations":["Masaryk University, Brno, Czech Republic"],"email":"kozlikova@fi.muni.cz","is_corresponding":false,"name":"Barbora Kozlikova"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Mat\u011bj Lang"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1571","time_end":"","time_stamp":"","time_start":"","title":"Who Let the Guards Out: Visual Support for Patrolling Games","uid":"v-full-1571","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"The numerical extraction of vortex cores from time-dependent fluid flow has attracted much attention over the past decades. A commonly agreed-upon vortex definition has remained elusive since a proper vortex core needs to satisfy two hard constraints: it must be objective and Lagrangian. Recent methods on objectivization met the first but not the second constraint, since there was no formal guarantee that the resulting vortex coreline is indeed a pathline of the fluid flow. In this paper, we propose the first vortex core definition that is both objective and Lagrangian. Our approach restricts observer motions to follow along pathlines, which reduces the degrees of freedom: we only need to optimize for an observer rotation that makes the observed flow as steady as possible. This optimization succeeds along Lagrangian vortex corelines and will result in a non-zero time-partial derivative everywhere else. By performing this optimization at each point of a spatial grid, we obtain a residual scalar field, which we call vortex deviation error. The local minima on the grid serve as seed points for a gradient descent optimization that delivers sub-voxel accurate corelines. The visualization of both 2D and 3D vortex cores is based on the separation of the movement of the vortex core and the swirling flow behavior around it. While the vortex core is represented by a pathline, the swirling motion around it is visualized by streamlines in the correct frame. 
We demonstrate the utility of the approach on several 2D and 3D time-dependent vector fields.","accessible_pdf":false,"authors":[{"affiliations":["Friedrich-Alexander-University Erlangen-N\u00fcrnberg, Erlangen, Germany"],"email":"tobias.guenther@fau.de","is_corresponding":true,"name":"Tobias G\u00fcnther"},{"affiliations":["University of Magdeburg, Magdeburg, Germany"],"email":"theisel@ovgu.de","is_corresponding":false,"name":"Holger Theisel"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Tobias G\u00fcnther"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1574","time_end":"","time_stamp":"","time_start":"","title":"Objective Lagrangian Vortex Cores and their Visual Representations","uid":"v-full-1574","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"The visualization community has a rich history of reflecting upon visualization design flaws. Although research in this area has remained lively, we believe it is essential to continuously revisit this classic and critical topic in visualization research by incorporating more empirical evidence from diverse sources, characterizing new design flaws, building more systematic theoretical frameworks, and understanding the underlying reasons for these flaws. To address the above gaps, this work investigated visualization design flaws through the lens of the public, constructed a framework to summarize and categorize the identified flaws, and explored why these flaws occur. Specifically, we analyzed 2227 flawed data visualizations collected from an online gallery and derived a design task-associated taxonomy containing 76 specific design flaws. These flaws were further classified into three high-level categories (i.e., misinformation, uninformativeness, unsociability) and ten subcategories (e.g., inaccuracy, unfairness, ambiguity). Next, we organized five focus groups to explore why these design flaws occur and identified seven causes of the flaws. 
Finally, we proposed a research agenda for combating visualization design flaws and summarized nine research opportunities.","accessible_pdf":false,"authors":[{"affiliations":["Fudan University, Shanghai, China","Fudan University, Shanghai, China"],"email":"xingyulan96@gmail.com","is_corresponding":true,"name":"Xingyu Lan"},{"affiliations":["University of Edinburgh, Edinburgh, United Kingdom","University of Edinburgh, Edinburgh, United Kingdom"],"email":"coraline.liu.dataviz@gmail.com","is_corresponding":false,"name":"Yu Liu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Xingyu Lan"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1594","time_end":"","time_stamp":"","time_start":"","title":"I Came Across a Junk: Understanding Design Flaws of Data Visualization from the Public's Perspective","uid":"v-full-1594","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Assigning discriminable and harmonic colors to samples according to their class labels and spatial distribution can generate attractive visualizations and facilitate data exploration. However, as the number of classes increases, it is challenging to generate a high-quality color assignment result that accommodates all classes simultaneously. A practical solution is to organize classes into a hierarchy and then dynamically assign colors during exploration. However, existing color assignment methods fall short in generating high-quality color assignment results and dynamically aligning them with hierarchical structures. To address this issue, we develop a dynamic color assignment method for hierarchical data, which is formulated as a multi-objective optimization problem. This method simultaneously considers color discriminability, color harmony, and spatial distribution at each hierarchical level. By using the colors of parent classes to guide the color assignment of their child classes, our method further promotes both consistency and clarity across hierarchical levels. 
We demonstrate the effectiveness of our method in generating dynamic color assignment results with quantitative experiments and a user study.","accessible_pdf":false,"authors":[{"affiliations":["Tsinghua University, Beijing, China"],"email":"jiashu0717c@gmail.com","is_corresponding":true,"name":"Jiashu Chen"},{"affiliations":["Tsinghua University, Beijing, China"],"email":"vicayang496@gmail.com","is_corresponding":false,"name":"Weikai Yang"},{"affiliations":["Tsinghua University, Beijing, China"],"email":"jiazl22@mails.tsinghua.edu.cn","is_corresponding":false,"name":"Zelin Jia"},{"affiliations":["Tsinghua University, Beijing, China"],"email":"tarolancy@gmail.com","is_corresponding":false,"name":"Lanxi Xiao"},{"affiliations":["Tsinghua University, Beijing, China"],"email":"shixia@tsinghua.edu.cn","is_corresponding":false,"name":"Shixia Liu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Jiashu Chen"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1595","time_end":"","time_stamp":"","time_start":"","title":"Dynamic Color Assignment for Hierarchical Data","uid":"v-full-1595","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"In understanding and redesigning the function of proteins in modern biochemistry, protein engineers are increasingly focusing on exploring regions in proteins called loops. Analyzing various characteristics of these regions helps the experts design the transfer of the desired function from one protein to another. This process is denoted as loop grafting. We designed a set of interactive visualizations that provide experts with visual support through all the loop grafting pipeline steps. The workflow is divided into several phases, reflecting the steps of the pipeline. Each phase is supported by a specific set of abstracted 2D visual representations of proteins and their loops that are interactively linked with the 3D View of proteins. By sequentially passing through the individual phases, the user shapes the list of loops that are potential candidates for loop grafting. Finally, the actual in-silico insertion of the loop candidates from one protein to the other is performed, and the results are visually presented to the user. In this way, the fully computational rational design of proteins and their loops results in newly designed protein structures that can be further assembled and tested through in-vitro experiments. We showcase the contribution of our visual support design in a real-world scenario: changing the enantiomer selectivity of an engineered enzyme. Moreover, we provide readers with the experts' feedback.","accessible_pdf":false,"authors":[{"affiliations":["Masaryk University, Brno, Czech Republic"],"email":"kiraa@mail.muni.cz","is_corresponding":false,"name":"Filip Op\u00e1len\u00fd"},{"affiliations":["Masaryk University, Brno, Czech Republic"],"email":"paloulbrich@gmail.com","is_corresponding":false,"name":"Pavol Ulbrich"},{"affiliations":["Masaryk University, Brno, Czech Republic","St. 
Anne\u2019s University Hospital, Brno, Czech Republic"],"email":"joan.planas@mail.muni.cz","is_corresponding":false,"name":"Joan Planas-Iglesias"},{"affiliations":["Masaryk University, Brno, Czech Republic","University of Bergen, Bergen, Norway"],"email":"xbyska@fi.muni.cz","is_corresponding":false,"name":"Jan By\u0161ka"},{"affiliations":["Masaryk University, Brno, Czech Republic","St. Anne\u2019s University Hospital, Brno, Czech Republic"],"email":"stourac.jan@gmail.com","is_corresponding":false,"name":"Jan \u0160toura\u010d"},{"affiliations":["Faculty of Science, Masaryk University, Brno, Czech Republic","St. Anne\u2019s University Hospital Brno, Brno, Czech Republic"],"email":"222755@mail.muni.cz","is_corresponding":false,"name":"David Bedn\u00e1\u0159"},{"affiliations":["Masaryk University, Brno, Czech Republic"],"email":"katarina.furmanova@gmail.com","is_corresponding":true,"name":"Katar\u00edna Furmanov\u00e1"},{"affiliations":["Masaryk University, Brno, Czech Republic"],"email":"kozlikova@fi.muni.cz","is_corresponding":false,"name":"Barbora Kozlikova"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Katar\u00edna Furmanov\u00e1"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1597","time_end":"","time_stamp":"","time_start":"","title":"Visual Support for the Loop Grafting Workflow on Proteins","uid":"v-full-1597","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Existing deep learning-based surrogate models facilitate efficient data generation, but fall short in uncertainty quantification, efficient parameter space exploration, and reverse prediction. In our work, we introduce SurroFlow, a novel normalizing flow-based surrogate model, to learn the invertible transformation between simulation parameters and simulation outputs. The model not only allows accurate predictions of simulation outcomes for a given simulation parameter but also supports uncertainty quantification in the data generation process. Additionally, it enables efficient simulation parameter recommendation and exploration. We integrate SurroFlow and a genetic algorithm as the backend of a visual interface to support effective user-guided ensemble simulation exploration and visualization. 
Our framework significantly reduces the computational costs while enhancing the reliability and exploration capabilities of scientific surrogate models.","accessible_pdf":false,"authors":[{"affiliations":["The Ohio State University, Columbus, United States","The Ohio State University, Columbus, United States"],"email":"shen.1250@osu.edu","is_corresponding":true,"name":"JINGYI SHEN"},{"affiliations":["The Ohio State University, Columbus, United States","The Ohio State University, Columbus, United States"],"email":"duan.418@osu.edu","is_corresponding":false,"name":"Yuhan Duan"},{"affiliations":["The Ohio State University, Columbus, United States","The Ohio State University, Columbus, United States"],"email":"hwshen@cse.ohio-state.edu","is_corresponding":false,"name":"Han-Wei Shen"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["JINGYI SHEN"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1599","time_end":"","time_stamp":"","time_start":"","title":"SurroFlow: A Flow-Based Surrogate Model for Parameter Space Exploration and Uncertainty Quantification","uid":"v-full-1599","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Multi-modal embeddings, such as CLIP embeddings, the most widely used text-image embeddings, form the foundation for vision-language models. However, these embeddings are hard to interpret and vulnerable to subtle misalignment of cross-modal features, resulting in decreased model performance and diminished generalization. To address this problem, we design ModalChorus, an interactive system for visual probing and alignment of multi-modal embeddings. ModalChorus primarily offers a two-stage process: 1) embedding probing with Modal Fusion Map (MFM), a novel parametric dimensionality reduction method that integrates both metric and nonmetric objectives to enhance modality fusion; and 2) embedding alignment that allows users to interactively articulate intentions for both point-set and set-set alignments. Quantitative and qualitative comparisons for CLIP embeddings with existing dimensionality reduction (e.g., t-SNE and MDS) and data fusion (e.g., data context map) methods demonstrate the advantages of MFM in showcasing cross-modal features over common vision-language datasets. 
Case studies reveal that ModalChorus can facilitate intuitive discovery of misalignment and efficient re-alignment in scenarios ranging from zero-shot classification to cross-modal retrieval and generation.","accessible_pdf":false,"authors":[{"affiliations":["The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China"],"email":"yyebd@connect.ust.hk","is_corresponding":true,"name":"Yilin Ye"},{"affiliations":["The Hong Kong University of Science and Technology(Guangzhou), Guangzhou, China"],"email":"sxiao713@connect.hkust-gz.edu.cn","is_corresponding":false,"name":"Shishi Xiao"},{"affiliations":["the Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China"],"email":"xingchen.zeng@outlook.com","is_corresponding":false,"name":"Xingchen Zeng"},{"affiliations":["The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China","The Hong Kong University of Science and Technology, Hong Kong SAR, China"],"email":"weizeng@hkust-gz.edu.cn","is_corresponding":false,"name":"Wei Zeng"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yilin Ye"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1603","time_end":"","time_stamp":"","time_start":"","title":"ModalChorus: Visual Probing and Alignment of Multi-modal Embeddings via Modal Fusion Map","uid":"v-full-1603","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"With the increase of graph size, it becomes difficult or even impossible to visualize graph structures clearly within the limited screen space. Consequently, it is crucial to design effective visual representations for large graphs. In this paper, we propose AdaMotif, a novel approach that can capture the essential structure patterns of large graphs and effectively reveal the overall structures via adaptive motif designs. Specifically, our approach involves partitioning a given large graph into multiple subgraphs, then clustering similar subgraphs and extracting similar structural information within each cluster. Subsequently, adaptive motifs representing each cluster are generated and utilized to replace the corresponding subgraphs, leading to a simplified visualization. Our approach aims to preserve as much information from the subgraphs as possible, effectively simplifying graphs while minimizing information loss. Notably, our approach successfully visualizes crucial community information within a large graph. We conduct case studies and a user study using both synthetic and real-world graphs to validate the effectiveness of our proposed approach. 
The results demonstrate the capability of our approach in simplifying graphs while retaining important structural and community information.","accessible_pdf":false,"authors":[{"affiliations":["Shenzhen University, Shenzhen, China"],"email":"hzhou@szu.edu.cn","is_corresponding":true,"name":"Hong Zhou"},{"affiliations":["Shenzhen University, Shenzhen, China"],"email":"laipeifeng1111@gmail.com","is_corresponding":false,"name":"Peifeng Lai"},{"affiliations":["Shenzhen University, Shenzhen, China"],"email":"zhida.sun@connect.ust.hk","is_corresponding":false,"name":"Zhida Sun"},{"affiliations":["Shenzhen University, Shenzhen, China"],"email":"2310274034@email.szu.edu.cn","is_corresponding":false,"name":"Xiangyuan Chen"},{"affiliations":["Shenzhen University, Shen Zhen, China"],"email":"275621136@qq.com","is_corresponding":false,"name":"Yang Chen"},{"affiliations":["Shenzhen University, Shenzhen, China"],"email":"hswu@szu.edu.cn","is_corresponding":false,"name":"Huisi Wu"},{"affiliations":["Nanyang Technological University, Singapore, Singapore"],"email":"yong-wang@ntu.edu.sg","is_corresponding":false,"name":"Yong WANG"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Hong Zhou"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1606","time_end":"","time_stamp":"","time_start":"","title":"AdaMotif: Graph Simplification via Adaptive Motif Design","uid":"v-full-1606","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Partitionings (or segmentations) divide a given domain into disjoint connected regions whose union again forms the entire domain. Multi-dimensional partitionings occur, for example, when analyzing parameter spaces of simulation models, where each segment of the partitioning represents a region of similar model behavior. Having computed a partitioning, one is commonly interested in understanding how large the segments are and which segments lie next to each other. While visual representations of 2D domain partitionings that reveal sizes and neighborhoods are straightforward, this is no longer the case when considering multi-dimensional domains of three or more dimensions. We propose an algorithm for computing 2D embeddings of multi-dimensional partitionings. The embedding shall have the following properties: It shall maintain the topology of the partitioning and optimize the area sizes and joint boundary lengths of the embedded segments to match the respective sizes and lengths in the multi-dimensional domain. We demonstrate the effectiveness of our approach by applying it to different use cases, including the visual exploration of 3D spatial domain segmentations and multi-dimensional parameter space partitionings of simulation ensembles. 
We numerically evaluate our algorithm with respect to how well sizes and lengths are preserved depending on the dimensionality of the domain and the number of segments.","accessible_pdf":false,"authors":[{"affiliations":["University of M\u00fcnster, M\u00fcnster, Germany"],"email":"m_ever14@uni-muenster.de","is_corresponding":true,"name":"Marina Evers"},{"affiliations":["University of M\u00fcnster, M\u00fcnster, Germany"],"email":"linsen@uni-muenster.de","is_corresponding":false,"name":"Lars Linsen"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Marina Evers"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1612","time_end":"","time_stamp":"","time_start":"","title":"2D Embeddings of Multi-dimensional Partitionings","uid":"v-full-1612","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"We present a path-based design model and system for designing and creating visualisations. Our model represents a systematic approach to constructing visual representations of data or concepts following a predefined sequence of steps. The initial step involves outlining the overall appearance of the visualisation by creating a skeleton structure, referred to as a flowpath. Subsequently, we specify objects, visual marks, properties, and appearance, storing them in a gene. Lastly, we map data onto the flowpath, ensuring suitable morphisms. Alternative designs are created by exchanging values in the gene. For example, designs that share similar traits are created by making small incremental changes to the gene. Our design method develops a wide variety of creative ideas, space-filling visualisations, and traditional designs (bar chart, pie chart, etc.). Our implementation demonstrates the model, and we apply the output visualisations to a smart-watch and to visualisation dashboards. In this article we (1) introduce, define and explain the path model and discuss possibilities for its use, (2) present our implementation, results, and evaluation, and (3) demonstrate and evaluate an application of its use on a mobile watch.","accessible_pdf":false,"authors":[{"affiliations":["ExaDev, Gaerwen, United Kingdom","Bangor University, Bangor, United Kingdom"],"email":"james.ogge@gmail.com","is_corresponding":false,"name":"James R Jackson"},{"affiliations":["Bangor University, Bangor, United Kingdom"],"email":"p.ritsos@bangor.ac.uk","is_corresponding":false,"name":"Panagiotis D. Ritsos"},{"affiliations":["Bangor University, Bangor, United Kingdom"],"email":"p.butcher@bangor.ac.uk","is_corresponding":false,"name":"Peter W. S. 
Butcher"},{"affiliations":["Bangor University, Bangor, United Kingdom"],"email":"j.c.roberts@bangor.ac.uk","is_corresponding":true,"name":"Jonathan C Roberts"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Jonathan C Roberts"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1613","time_end":"","time_stamp":"","time_start":"","title":"Path-based Design Model for Constructing and Exploring Alternative Visualisations","uid":"v-full-1613","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"We present Cell2Cell, a novel visual analytics approach for quantifying and visualizing networks of cell-cell interactions in three-dimensional (3D) multi-channel cancerous tissue data. By analyzing cellular interactions, biomedical domain experts can gain a more accurate understanding of the intricate relationships between cancer and immune cells. Recent methods have focused on inferring interaction based on the proximity of cells in low-resolution 2D multi-channel imaging data. By contrast, we analyze cell interactions by quantifying the intensities of protein expressions extracted from high-resolution 3D multi-channel volume data. Such analyses have a strong exploratory nature and require a tight integration of domain experts in the analysis loop to leverage their deep knowledge. We propose two complementary semi-automated approaches to cope with the increasing size and complexity of the data in an interactive fashion: On the one hand, we interpret cell-to-cell interactions as edges in a cell graph and analyze the image signal (protein expressions) along those edges, using spatial as well as abstract data visualizations. Complementary, we propose a cell-centered approach, enabling scientists to visually analyze polarized distributions of proteins in three dimensions, which also captures neighboring cells with biochemical and cell biological consequences. We evaluate our application in two case studies, where computational biologists and medical experts use \\tool to investigate tumor micro-environments to identify and quantify T-cell activation in human tissue data. 
We confirmed that our tool can fully solve both use cases and enables a streamlined and detailed analysis of cell-cell interactions.","accessible_pdf":false,"authors":[{"affiliations":["Harvard Medical School, Boston, United States"],"email":"eric.moerth@gmx.at","is_corresponding":true,"name":"Eric M\u00f6rth"},{"affiliations":["University of Vienna, Vienna, Austria"],"email":"kevin.sidak@univie.ac.at","is_corresponding":false,"name":"Kevin Sidak"},{"affiliations":["Harvard Medical School, Boston, United States"],"email":"zoltan_maliga@hms.harvard.edu","is_corresponding":false,"name":"Zoltan Maliga"},{"affiliations":["University of Vienna, Vienna, Austria"],"email":"torsten.moeller@univie.ac.at","is_corresponding":false,"name":"Torsten M\u00f6ller"},{"affiliations":["Harvard Medical School, Boston, United States"],"email":"nils@hms.harvard.edu","is_corresponding":false,"name":"Nils Gehlenborg"},{"affiliations":["Harvard University, Cambridge, United States"],"email":"peter_sorger@hms.harvard.edu","is_corresponding":false,"name":"Peter Sorger"},{"affiliations":["Harvard University, Cambridge, United States"],"email":"pfister@seas.harvard.edu","is_corresponding":false,"name":"Hanspeter Pfister"},{"affiliations":["Harvard University, Cambridge, United States"],"email":"jbeyer@g.harvard.edu","is_corresponding":false,"name":"Johanna Beyer"},{"affiliations":["New York University, New York, United States","Harvard University, Boston, United States"],"email":"rk4815@nyu.edu","is_corresponding":false,"name":"Robert Kr\u00fcger"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Eric M\u00f6rth"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1615","time_end":"","time_stamp":"","time_start":"","title":"Cell2Cell: Explorative Cell Interaction Analysis in Multi-Volumetric Tissue Data","uid":"v-full-1615","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"We propose and study a novel cross-reality environment that seamlessly integrates a monoscopic 2D surface (an interactive screen with touch and pen input) with a stereoscopic 3D space (an augmented reality HMD) to jointly host spatial data visualizations. This innovative approach combines the best of two conventional methods of displaying and manipulating spatial 3D data, enabling users to fluidly explore diverse visual forms using tailored interaction techniques. Providing such effective 3D data exploration techniques is pivotal for conveying the intricate spatial structures of such data, often at multiple spatial or semantic scales, across various application domains that require diverse visual representations for effective visualization. To understand user reactions to our new environment, we began with an elicitation user study, in which we captured their responses and interactions. We observed that users adapted their interaction approaches based on perceived visual representations, with natural transitions in spatial awareness and actions while navigating across the physical surface. Our findings then informed the development of a design space for spatial data exploration in cross-reality. 
We thus developed cross-reality environments tailored to three distinct domains: for 3D molecular structure data, for 3D point cloud data, and for 3D anatomical data. In particular, we designed interaction techniques that account for the inherent features of interactions in both spaces, facilitating various forms of interaction including mid-air gestures, touch interactions, pen interactions, and combinations thereof to enhance the users' sense of presence and engagement. We assessed the usability of our environment with biologists, focusing on its use for domain research. In addition, we evaluated our interaction transition designs with virtual and mixed-reality experts to gather further insights. As a result, we provide our design suggestions for the cross-reality environment, emphasizing the interaction with diverse visual representations and seamless interaction transitions between 2D and 3D spaces.","accessible_pdf":false,"authors":[{"affiliations":["Xi'an Jiaotong-Liverpool University, Suzhou, China"],"email":"lixiang.zhao17@student.xjtlu.edu.cn","is_corresponding":false,"name":"Lixiang Zhao"},{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Orsay, France","Inria, Saclay, France"],"email":"tobias.isenberg@gmail.com","is_corresponding":false,"name":"Tobias Isenberg"},{"affiliations":["Xi'an Jiaotong-Liverpool University, Suzhou, China"],"email":"fuqi.xie20@student.xjtlu.edu.cn","is_corresponding":false,"name":"Fuqi Xie"},{"affiliations":["Xi'an Jiaotong-Liverpool University, Suzhou, China"],"email":"hainingliang@hkust-gz.edu.cn","is_corresponding":false,"name":"Hai-Ning Liang"},{"affiliations":["Xi'an Jiaotong-Liverpool University, Suzhou, China"],"email":"lingyun.yu@xjtlu.edu.cn","is_corresponding":true,"name":"Lingyun Yu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Lingyun Yu"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1626","time_end":"","time_stamp":"","time_start":"","title":"SpatialTouch: Exploring Spatial Data Visualizations in Cross-reality","uid":"v-full-1626","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"High-dimensional data, characterized by many features, can be difficult to visualize effectively. Dimensionality reduction techniques, such as PCA, UMAP, and t-SNE, address this challenge by projecting the data into a lower-dimensional space while preserving important relationships. TopoMap is another technique that excels at preserving the underlying structure of the data, leading to interpretable visualizations. In particular, TopoMap maps the high-dimensional data into a visual space, guaranteeing that the 0-dimensional persistence diagram of the Rips filtration of the visual space matches the one from the high-dimensional data. However, the original TopoMap algorithm can be slow and its layout can be too sparse for large and complex datasets. In this paper, we propose three improvements to TopoMap: 1) a more space-efficient layout, 2) a significantly faster implementation, and 3) a novel treemap-based representation to aid the exploration of the projections. 
These advancements make TopoMap, now referred to as TopoMap++, a more powerful tool for visualizing high-dimensional data, similar to how t-SNE surpassed SNE in popularity.","accessible_pdf":false,"authors":[{"affiliations":["New York University, New York City, United States"],"email":"vitoriaguardieiro@gmail.com","is_corresponding":true,"name":"Vitoria Guardieiro"},{"affiliations":["New York University, New York City, United States"],"email":"felipedeoliveira1407@gmail.com","is_corresponding":false,"name":"Felipe Inagaki de Oliveira"},{"affiliations":["Microsoft Research India, Bangalore, India"],"email":"harish.doraiswamy@microsoft.com","is_corresponding":false,"name":"Harish Doraiswamy"},{"affiliations":["University of Sao Paulo, Sao Carlos, Brazil"],"email":"gnonato@icmc.usp.br","is_corresponding":false,"name":"Luis Gustavo Nonato"},{"affiliations":["New York University, New York City, United States"],"email":"csilva@nyu.edu","is_corresponding":false,"name":"Claudio Silva"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Vitoria Guardieiro"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1632","time_end":"","time_stamp":"","time_start":"","title":"TopoMap++: A faster and more space efficient technique to compute projections with topological guarantees","uid":"v-full-1632","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Probability density function (PDF) curves are among the few charts on a Cartesian coordinate system that are commonly presented without y-axes. This design decision may be due to the lack of relevance of vertical scaling in normal PDFs. In fact, as long as two normal PDFs have the same means and standard deviations (SDs), they can be scaled to occupy different amounts of vertical space while still remaining statistically identical. Because unscaled PDF height increases as SD decreases, visualization designers may find themselves tempted to vertically shrink low-SD PDFs to avoid occlusion or save white space in their figures. While irregular vertical scaling has been explored in bar and line charts, the visualization community has yet to investigate how this purely visual manipulation may affect reader comparisons of PDFs. In this paper, we present two preregistered quantitative experiments (n=600, n=401) that systematically demonstrate that vertical scaling can lead to misinterpretations of PDFs. We also test visual interventions to mitigate misinterpretation. In some contexts, we find that including a y-axis reduces this effect. Overall, we find that keeping vertical scaling consistent, and therefore maintaining equal pixel areas under PDF curves, results in the highest likelihood of accurate comparisons. Our findings provide the first insights into the impact of vertical scaling on PDFs, and reveal the complicated nature of proportional area comparisons.","accessible_pdf":false,"authors":[{"affiliations":["Northeastern University, Boston, United States"],"email":"racquel.fygenson@gmail.com","is_corresponding":true,"name":"Racquel Fygenson"},{"affiliations":["Northeastern University, Boston, United States"],"email":"l.padilla@northeastern.edu","is_corresponding":false,"name":"Lace M. 
Padilla"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Racquel Fygenson"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1638","time_end":"","time_stamp":"","time_start":"","title":"The Impact of Vertical Scaling on Normal Probability Density Function Plots","uid":"v-full-1638","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Despite the development of numerous visual analytics tools for event sequence data across various domains, including, but not limited to healthcare, digital marketing, and user behavior analysis, comparing these domain-specific investigations and transferring the results to new datasets and problem areas remain challenging. Task abstractions can help us go beyond domain-specific details, but existing visualization task abstractions are insufficient for event sequence visual analytics because they primarily focus on tabular datasets and often overlook automated analytical techniques. To address this gap, we propose a domain-agnostic multi-level task framework for event sequence analysis, derived from an analysis of 58 papers that present event sequence visualization systems. Our framework consists of four levels: objective, intent, strategy, and technique. Overall objectives identify the main goals of analysis. Intents comprises five high-level approaches adopted at each analysis step: augment data, simplify data, configure data, configure visualization, and create provenance. Each intent is accomplished through a number of strategies, for instance, data simplification can be achieved through aggregation, summarization, or segmentation. Finally, each strategy can be implemented by a set of techniques depending on the input and output components. We further show that techniques can be expressed through a quartet of action-input-output-criteria. 
We demonstrate the framework\u2019s power through mapping case studies and discuss its similarities and differences with previous event sequence task taxonomies.","accessible_pdf":false,"authors":[{"affiliations":["University of Maryland, College Park, College Park, United States"],"email":"kzintas@umd.edu","is_corresponding":true,"name":"Kazi Tasnim Zinat"},{"affiliations":["University of Maryland, College Park, United States"],"email":"ssakhamu@terpmail.umd.edu","is_corresponding":false,"name":"Saimadhav Naga Sakhamuri"},{"affiliations":["University of Maryland, College Park, United States"],"email":"achen151@terpmail.umd.edu","is_corresponding":false,"name":"Aaron Sun Chen"},{"affiliations":["University of Maryland, College Park, United States"],"email":"leozcliu@umd.edu","is_corresponding":false,"name":"Zhicheng Liu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Kazi Tasnim Zinat"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1642","time_end":"","time_stamp":"","time_start":"","title":"A Multi-Level Task Framework for Event Sequence Analysis","uid":"v-full-1642","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"In recent years, the global adoption of electric vehicles (EVs) has surged, prompting a corresponding rise in the installation of charging stations. This proliferation has underscored the importance of expediting the deployment of charging infrastructure. Both academia and industry have thus devoted effort to addressing the charging station location problem (CSLP) to streamline this process. However, prevailing algorithms addressing CSLP are hampered by restrictive assumptions and computational overhead, leading to a dearth of comprehensive evaluations in the spatiotemporal dimensions. Consequently, their practical viability is restricted. Moreover, the placement of charging stations exerts a significant impact on both the road network and the power grid, which necessitates the evaluation of the potential post-deployment impacts on these interconnected networks holistically. In this study, we propose CSLens, a visual analytics system designed to inform charging station deployment decisions through the lens of coupled transportation and power networks. CSLens offers multiple visualizations and interactive features, empowering users to delve into the existing charging station layout, explore alternative deployment solutions, and assess the ensuing impact. To validate the efficacy of CSLens, we conducted two case studies and engaged in interviews with domain experts. Through these efforts, we substantiated the usability and practical utility of CSLens in enhancing the decision-making process surrounding charging station deployment. 
Our findings underscore CSLens\u2019s potential to serve as a valuable asset in navigating the complexities of charging infrastructure planning.","accessible_pdf":false,"authors":[{"affiliations":["Sun Yat-sen University, Shenzhen, China"],"email":"zhangyt85@mail2.sysu.edu.cn","is_corresponding":false,"name":"Yutian Zhang"},{"affiliations":["Sun Yat-sen University, Shenzhen, China"],"email":"xulw8@mail2.sysu.edu.cn","is_corresponding":false,"name":"Liwen Xu"},{"affiliations":["Sun Yat-sen University, Shenzhen, China"],"email":"taoshc@mail2.sysu.edu.cn","is_corresponding":false,"name":"Shaocong Tao"},{"affiliations":["Sun Yat-sen University, Shenzhen, China"],"email":"guanqx3@mail.sysu.edu.cn","is_corresponding":false,"name":"Quanxue Guan"},{"affiliations":["ShanghaiTech University, Shanghai, China"],"email":"liquan@shanghaitech.edu.cn","is_corresponding":false,"name":"Quan Li"},{"affiliations":["Sun Yat-sen University, Shenzhen, China"],"email":"zenghp5@mail.sysu.edu.cn","is_corresponding":true,"name":"Haipeng Zeng"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Haipeng Zeng"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1681","time_end":"","time_stamp":"","time_start":"","title":"CSLens: Towards Better Deploying Charging Stations via Visual Analytics \u2014\u2014 A Coupled Networks Perspective","uid":"v-full-1681","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"We introduce a visual analysis method for multiple causality graphs with different outcome variables, namely, multi-outcome causality graphs. Multi-outcome causality graphs are important in healthcare for understanding multimorbidity and comorbidity. To support the visual analysis, we collaborated with medical experts to devise two comparative visualization techniques at different stages of the analysis process. First, a progressive visualization method is proposed for comparing multiple state-of-the-art causal discovery algorithms. The method can handle mixed-type datasets comprising both continuous and categorical variables and assist in the creation of a fine-tuned causality graph of a single outcome. Second, a comparative graph layout technique and specialized visual encodings are devised for the quick comparison of multiple causality graphs. In our visual analysis approach, analysts start by building individual causality graphs for each outcome variable, and then, multi-outcome causality graphs are generated and visualized with our comparative technique for analyzing differences and commonalities of these causality graphs. 
Evaluation includes quantitative measurements on benchmark datasets, a case study with a medical expert, and expert user studies with real-world health research data.","accessible_pdf":false,"authors":[{"affiliations":["Institute of Medical Technology, Peking University Health Science Center, Beijing, China","National Institute of Health Data Science, Peking University, Beijing, China"],"email":"mengjiefan@bjmu.edu.cn","is_corresponding":true,"name":"Mengjie Fan"},{"affiliations":["Beihang University, Beijing, China","Peking University, Beijing, China"],"email":"yu.jinlu@qq.com","is_corresponding":false,"name":"Jinlu Yu"},{"affiliations":["University of Stuttgart, Stuttgart, Germany"],"email":"weiskopf@visus.uni-stuttgart.de","is_corresponding":false,"name":"Daniel Weiskopf"},{"affiliations":["Tongji College of Design and Innovation, Shanghai, China"],"email":"nan.cao@gmail.com","is_corresponding":false,"name":"Nan Cao"},{"affiliations":["Beijing University of Chinese Medicine, Beijing, China"],"email":"wanghuaiyuelva@126.com","is_corresponding":false,"name":"Huaiyu Wang"},{"affiliations":["Peking University, Beijing, China"],"email":"zhoulng@pku.edu.cn","is_corresponding":false,"name":"Liang Zhou"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Mengjie Fan"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1693","time_end":"","time_stamp":"","time_start":"","title":"Visual Analysis of Multi-outcome Causal Graphs","uid":"v-full-1693","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Room-scale immersive data visualisations provide viewers with a wide-scale overview of a large dataset, but to interact precisely with individual data points they typically have to navigate to change their point of view. In traditional screen-based visualisations, focus-and-context techniques allow visualisation users to keep a full dataset in view while making detailed selections. Such techniques have been studied extensively on desktop to allow precise selection within large data sets, but they have not been explored in immersive 3D modalities. In this paper we develop a novel immersive focus-and-context technique based on a \u201cmagic portal\u201d metaphor adapted specifically for data visualisation scenarios. An extendable-hand interaction technique is used to place a portal close to the region of interest. The other end of the portal then opens comfortably within the user's physical reach such that they can reach through to precisely select individual data points. Through a controlled study with 24 participants, we find strong evidence that portals reduce overshoots in selection and overall hand trajectory length, reducing arm fatigue compared to ranged interaction without the portal. The portals also enable us to use a robot arm to provide haptic feedback for data within the limited volume of the portal region. 
We demonstrate applications for portal-based selection through two use-case scenarios.","accessible_pdf":false,"authors":[{"affiliations":["Monash University, Melbourne, Australia"],"email":"dai.shaozhang@gmail.com","is_corresponding":true,"name":"Shaozhang Dai"},{"affiliations":["Monash University, Melbourne, Australia"],"email":"yi.li5@monash.edu","is_corresponding":false,"name":"Yi Li"},{"affiliations":["The University of British Columbia (Okanagan Campus), Kelowna, Canada"],"email":"barrett.ens@ubc.ca","is_corresponding":false,"name":"Barrett Ens"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"lonni.besancon@gmail.com","is_corresponding":false,"name":"Lonni Besan\u00e7on"},{"affiliations":["Monash University, Melbourne, Australia"],"email":"tgdwyer@gmail.com","is_corresponding":false,"name":"Tim Dwyer"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Shaozhang Dai"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1699","time_end":"","time_stamp":"","time_start":"","title":"Precise Embodied Data Selection in Room-scale Visualisations While Retaining View Context","uid":"v-full-1699","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Contour trees describe the topology of level sets in scalar fields and are widely used in topological data analysis and visualization. A main challenge for utilizing contour trees for large-scale scientific data is their computation at scale using high-performance computing. To address this challenge, recent work has introduced distributed hierarchical contour trees for distributed computation and storage of contour trees. However, effective use of these distributed structures in analysis and visualization requires subsequent computation of geometric properties and branch decomposition to support contour extraction and exploration. In this work, we introduce distributed algorithms for augmentation, hypersweeps, and branch decomposition that enable parallel computation of geometric properties, and support the use of distributed contour trees as a query structure for scientific exploration. 
We evaluate the parallel performance of these algorithms and apply them to identify and extract important contours for scientific visualization.","accessible_pdf":false,"authors":[{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"mingzhefluorite@gmail.com","is_corresponding":true,"name":"Mingzhe Li"},{"affiliations":["University of Leeds, Leeds, United Kingdom"],"email":"h.carr@leeds.ac.uk","is_corresponding":false,"name":"Hamish Carr"},{"affiliations":["Lawrence Berkeley National Laboratory, Berkeley, United States"],"email":"oruebel@lbl.gov","is_corresponding":false,"name":"Oliver R\u00fcbel"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"wang.bei@gmail.com","is_corresponding":false,"name":"Bei Wang"},{"affiliations":["Lawrence Berkeley National Laboratory, Berkeley, United States"],"email":"ghweber@lbl.gov","is_corresponding":false,"name":"Gunther H Weber"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Mingzhe Li"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1705","time_end":"","time_stamp":"","time_start":"","title":"Distributed Augmentation, Hypersweeps, and Branch Decomposition of Contour Trees for Scientific Exploration","uid":"v-full-1705","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"The widespread use of Deep Neural Networks (DNNs) has recently resulted in their application to challenging scientific visualization tasks. While advanced DNNs demonstrate impressive generalization abilities, understanding factors like prediction quality, confidence, robustness, and uncertainty is crucial. These insights aid application scientists in making informed decisions. However, DNNs lack inherent mechanisms to measure prediction uncertainty, prompting the creation of distinct frameworks for constructing robust uncertainty-aware models tailored to various visualization tasks. In this work, we develop uncertainty-aware implicit neural representations to model steady-state vector fields effectively. We comprehensively evaluate the efficacy of two principled deep uncertainty estimation techniques: (1) Deep Ensemble and (2) Monte Carlo Dropout, aimed at enabling uncertainty-informed visual analysis of features within steady vector field data. Our detailed exploration using several vector data sets indicates that uncertainty-aware models generate informative visualization results of vector field features. 
Furthermore, incorporating prediction uncertainty improves the resilience and interpretability of our DNN model, rendering it applicable for the analysis of complex vector field data sets.","accessible_pdf":false,"authors":[{"affiliations":["Indian Institute of Technology Kanpur , Kanpur, India"],"email":"atulkrfcb@gmail.com","is_corresponding":false,"name":"Atul Kumar"},{"affiliations":["Indian Institute of Technology Kanpur , Kanpur , India"],"email":"gsiddharth2209@gmail.com","is_corresponding":false,"name":"Siddharth Garg"},{"affiliations":["Indian Institute of Technology Kanpur (IIT Kanpur), Kanpur, India"],"email":"soumya.cvpr@gmail.com","is_corresponding":true,"name":"Soumya Dutta"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Soumya Dutta"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1708","time_end":"","time_stamp":"","time_start":"","title":"Uncertainty-Aware Deep Neural Representations for Visual Analysis of Vector Field Data","uid":"v-full-1708","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"User experience in data visualization is typically assessed through post-viewing self-reports, but these overlook the dynamic cognitive processes during interaction. This study explores the use of mind wandering as a dynamic measure during visualization exploration. Participants reported mind wandering while viewing visualizations from a pre-labeled visualization database and then provided quantitative ratings of trust, engagement, and design quality, along with qualitative descriptions and short-term/long-term recall assessments. Results show that mind wandering negatively affects short-term visualization recall and various post-viewing measures, particularly for visualizations with little text annotation. Further, the type of mind wandering impacts engagement and emotional response. Mind wandering also acts as a serial mediator between visualization design elements and post-viewing measures. Overall, this research underscores the importance of incorporating mind wandering as a dynamic measure in visualization design and evaluation, offering novel avenues for enhancing user engagement and comprehension.","accessible_pdf":false,"authors":[{"affiliations":["Arizona State University, Tempe, United States"],"email":"aarunku5@asu.edu","is_corresponding":true,"name":"Anjana Arunkumar"},{"affiliations":["Northeastern University, Boston, United States"],"email":"l.padilla@northeastern.edu","is_corresponding":false,"name":"Lace M. 
Padilla"},{"affiliations":["Arizona State University, Tempe, United States"],"email":"cbryan16@asu.edu","is_corresponding":false,"name":"Chris Bryan"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Anjana Arunkumar"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1726","time_end":"","time_stamp":"","time_start":"","title":"Mind Drifts, Data Shifts: Utilizing Mind Wandering to Track the Evolution of User Experience with Data Visualizations","uid":"v-full-1726","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Understanding the input and output of data wrangling scripts is crucial for various tasks like debugging codes and onboarding new data. However, existing research on script understanding primarily focuses on revealing the process of data transformations, lacking the ability to analyze the potential scope, i.e., the space of script inputs and outputs. Meanwhile, constructing input/output space during script analysis is challenging, as the wrangling scripts could be semantically complex and diverse, and the association between different data objects is intricate. To facilitate data workers in understanding the input and output spaces of wrangling scripts, we summarize ten types of constraints to express table spaces, and build a mapping between data transformations and these constraints to guide the construction of the input/output for individual transformations. Then, we propose a constraint generation model for integrating table constraints across multiple transformations. Based on the model, we develop Ferry, an interactive system that extracts and visualizes the data constraints describing the input and output spaces of data wrangling scripts, thereby enabling users to grasp the high-level semantics of complex scripts and locate the origins of faulty data transformations. Besides, Ferry provides example input and output data to assist users in interpreting the extracted constraints, checking and resolving the conflicts between these constraints and any uploaded dataset. Ferry's effectiveness and usability are evaluated via a usage scenario and two case studies: the first assists users in onboarding new data and debugging scripts, while the second verifies input-output compatibility across data processing modules. 
Furthermore, an illustrative application is presented to demonstrate Ferry's flexibility.","accessible_pdf":false,"authors":[{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"rickyluozs@gmail.com","is_corresponding":true,"name":"Zhongsu Luo"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"kaixiong@zju.edu.cn","is_corresponding":false,"name":"Kai Xiong"},{"affiliations":["Zhejiang University, Hangzhou,Zhejiang, China"],"email":"3220105578@zju.edu.cn","is_corresponding":false,"name":"Jiajun Zhu"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"chenran928@zju.edu.cn","is_corresponding":false,"name":"Ran Chen"},{"affiliations":["Newcastle University, Newcastle Upon Tyne, United Kingdom"],"email":"xinhuan.shu@gmail.com","is_corresponding":false,"name":"Xinhuan Shu"},{"affiliations":["Zhejiang University, Ningbo, China"],"email":"dweng@zju.edu.cn","is_corresponding":false,"name":"Di Weng"},{"affiliations":["Zhejiang University, Hangzhou, China"],"email":"ycwu@zju.edu.cn","is_corresponding":false,"name":"Yingcai Wu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Zhongsu Luo"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1730","time_end":"","time_stamp":"","time_start":"","title":"Ferry: Toward Better Understanding of Input/Output Space for Data Wrangling Scripts","uid":"v-full-1730","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"As a step towards improving visualization literacy, we investigated how students approach reading visualizations differently after taking a university-level visualization course. We asked students to verbally walk through their process of making sense of unfamiliar visualizations, and conducted a qualitative analysis of these walkthroughs. Our qualitative analysis found changes in students' walkthroughs consistent with explicit learning goals of visualization courses. After taking a visualization course, students also engaged with visualizations in more sophisticated ways not fully captured by explicit learning goals: they were more likely to exhibit design empathy by thinking critically about the tradeoffs behind why a chart was designed in a particular way, and were better able to deconstruct a chart to make sense of it. We also gave students a quantitative assessment of visualization literacy and found no evidence of scores improving after the class, likely because the test we used focused on a different set of skills than those emphasized in visualization classes. While current measurement instruments for visualization literacy are useful, we propose developing standardized assessments for additional aspects of visualization literacy, such as deconstruction and design empathy. We also suggest those additional aspects could be made more explicit in learning goals set by visualization educators. 
All supplemental materials are available at https://osf.io/w5pum/?view_only=f9eca3fa4711425582d454031b9c482e.","accessible_pdf":false,"authors":[{"affiliations":["Northwestern University, Evanston, United States"],"email":"maryam.hedayati@u.northwestern.edu","is_corresponding":true,"name":"Maryam Hedayati"},{"affiliations":["Northwestern University, Chicago, United States"],"email":"matthew.kay@gmail.com","is_corresponding":false,"name":"Matthew Kay"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Maryam Hedayati"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1738","time_end":"","time_stamp":"","time_start":"","title":"What University Students Learn In Visualization Classes","uid":"v-full-1738","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Hypergraphs provide a natural way to represent polyadic relationships in network data. For large hypergraphs, it is often difficult to visually detect structures within the data. Recently, a scalable polygon-based visualization framework was developed allowing hypergraphs with thousands of hyperedges to be simplified and examined at different levels of detail. However, this approach does not consider structures such as cycles, bridges, and branches. Consequently, structures can be lost at simplified scales, making interpretations for real-world applications unreliable. In this paper, we define hypergraph structures using the bipartite graph representation. Powered by our analysis, we provide an algorithm to decompose large hypergraphs into meaningful features and to identify regions of non-planarity. We also introduce a set of topology preserving and topology altering atomic operations, enabling the preservation of important structures while removing topological noise in simplified scales. We demonstrate our approach in several real-world applications.","accessible_pdf":false,"authors":[{"affiliations":["Oregon State University, Corvallis, United States"],"email":"oliverpe@oregonstate.edu","is_corresponding":false,"name":"Peter D Oliver"},{"affiliations":["Oregon State University, Corvallis, United States"],"email":"zhange@eecs.oregonstate.edu","is_corresponding":true,"name":"Eugene Zhang"},{"affiliations":["Oregon State University, Corvallis, United States"],"email":"zhangyue@oregonstate.edu","is_corresponding":false,"name":"Yue Zhang"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Eugene Zhang"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1746","time_end":"","time_stamp":"","time_start":"","title":"Structure-Aware Simplification for Hypergraph Visualization","uid":"v-full-1746","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"The semantic similarity between documents of a text corpus can be visualized using map-like metaphors based on two-dimensional scatterplot layouts. 
These layouts result from a dimensionality reduction on the document-term matrix or a representation within a latent embedding, including topic models. The resulting layout therefore depends on the input data and the hyperparameters of the dimensionality reduction and is affected by changes in either. However, such changes to the layout require additional cognitive effort from the user. In this work, we present a sensitivity study that analyzes the stability of these layouts concerning (1) changes in the text corpora, (2) changes in the hyperparameters, and (3) randomness in the initialization. Our approach has two stages: data measurement and data analysis. First, we derived layouts for all combinations of three text corpora, six text embeddings, and a grid-search-inspired hyperparameter selection of the dimensionality reductions. Afterward, we quantified the similarity of the layouts through ten metrics, concerning local and global structures and class separation. Second, we analyzed the resulting 42817 tabular data points in a descriptive statistical analysis. From this, we derived guidelines for informed decisions on the layout algorithm and highlighted specific hyperparameter settings. We provide our implementation and results as a Git repository at https://github.com/hpicgs/Topic-Models-and-Dimensionality-Reduction-Sensitivity-Study .","accessible_pdf":false,"authors":[{"affiliations":["University of Potsdam, Digital Engineering Faculty, Hasso Plattner Institute, Potsdam, Germany"],"email":"daniel.atzberger@hpi.de","is_corresponding":true,"name":"Daniel Atzberger"},{"affiliations":["University of Potsdam, Potsdam, Germany"],"email":"tcech@uni-potsdam.de","is_corresponding":false,"name":"Tim Cech"},{"affiliations":["Hasso Plattner Institute, Faculty of Digital Engineering, University of Potsdam, Potsdam, Germany"],"email":"willy.scheibel@hpi.de","is_corresponding":false,"name":"Willy Scheibel"},{"affiliations":["Hasso Plattner Institute, Faculty of Digital Engineering, University of Potsdam, Potsdam, Germany"],"email":"juergen.doellner@hpi.de","is_corresponding":false,"name":"J\u00fcrgen D\u00f6llner"},{"affiliations":["Utrecht University, Utrecht, Netherlands"],"email":"m.behrisch@uu.nl","is_corresponding":false,"name":"Michael Behrisch"},{"affiliations":["Graz University of Technology, Graz, Austria"],"email":"tobias.schreck@cgv.tugraz.at","is_corresponding":false,"name":"Tobias Schreck"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Daniel Atzberger"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1770","time_end":"","time_stamp":"","time_start":"","title":"A Large-Scale Sensitivity Analysis on Latent Embeddings and Dimensionality Reductions for Text Spatializations","uid":"v-full-1770","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"This research explores a novel paradigm for preserving topological segmentations in existing error-bounded lossy compressors. 
Today's lossy compressors rarely consider preserving topologies such as Morse-Smale complexes, and the discrepancies in topology between original and decompressed datasets could potentially result in erroneous interpretations or even incorrect scientific conclusions. In this paper, we focus on preserving Morse-Smale segmentations in 2D/3D piecewise linear scalar fields, targeting the precise reconstruction of minimum/maximum labels induced by the integral curve of each vertex. The key is to derive a series of edits during compression time; the edits are applied to the decompressed data, leading to an accurate reconstruction of segmentations while keeping the error within the prescribed error bound. To this end, we developed a workflow to fix extrema and integral curves alternately until convergence within finite iterations; we accelerate each workflow component with shared-memory/GPU parallelism to make the performance practical for coupling with compressors. We demonstrate use cases with fluid dynamics, ocean, and cosmology application datasets, achieving a 1000x acceleration with an NVIDIA A100 GPU.","accessible_pdf":false,"authors":[{"affiliations":["The Ohio State University, Columbus, United States"],"email":"li.14025@osu.edu","is_corresponding":true,"name":"Yuxiao Li"},{"affiliations":["University of California, Riverside, Riverside, United States"],"email":"xlian007@ucr.edu","is_corresponding":false,"name":"Xin Liang"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"wang.bei@gmail.com","is_corresponding":false,"name":"Bei Wang"},{"affiliations":["The Ohio State University, Columbus, United States"],"email":"qiu.722@osu.edu","is_corresponding":false,"name":"Yongfeng Qiu"},{"affiliations":["Argonne National Laboratory, Lemont, United States"],"email":"lyan@anl.gov","is_corresponding":false,"name":"Lin Yan"},{"affiliations":["The Ohio State University, Columbus, United States"],"email":"guo.2154@osu.edu","is_corresponding":false,"name":"Hanqi Guo"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yuxiao Li"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1793","time_end":"","time_stamp":"","time_start":"","title":"MSz: An Efficient Parallel Algorithm for Correcting Morse-Smale Segmentations in Error-Bounded Lossy Compressors","uid":"v-full-1793","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"In the biomedical domain, visualizing the document embeddings of an extensive corpus has been widely used in information-seeking tasks. However, three key challenges with existing visualizations make it difficult for clinicians to find information efficiently. First, the document embeddings used in these visualizations are generated statically by pretrained language models, which cannot adapt to the user's evolving interest. Second, existing document visualization techniques cannot effectively display how the documents are relevant to users\u2019 interest, making it difficult for users to identify the most pertinent information. Third, existing embedding generation and visualization processes suffer from a lack of interpretability, making it difficult to understand, trust and use the result for decision-making. 
In this paper, we present a novel visual analytics pipeline for user-driven document representation and iterative information seeking (VADIS). VADIS introduces a prompt-based attention model (PAM) that generates dynamic document embeddings and document relevance adjusted to the user's query. To effectively visualize these two pieces of information, we design a new document map that leverages a circular grid layout to display documents based on both their relevance to the query and their semantic similarity. Additionally, to improve interpretability, we introduce a corpus-level attention visualization method that deepens the user's understanding of the model's focus and enables users to identify potential oversights. This visualization, in turn, empowers users to refine, update and introduce new queries, thereby facilitating a dynamic and iterative information-seeking experience. We evaluated VADIS quantitatively and qualitatively on a real-world dataset of biomedical research papers to demonstrate its effectiveness.","accessible_pdf":false,"authors":[{"affiliations":["Ohio State University, Columbus, United States"],"email":"qiu.580@buckeyemail.osu.edu","is_corresponding":true,"name":"Rui Qiu"},{"affiliations":["The Ohio State University, Columbus, United States"],"email":"tu.253@osu.edu","is_corresponding":false,"name":"Yamei Tu"},{"affiliations":["Washington University School of Medicine in St. Louis, St. Louis, United States"],"email":"yenp@wustl.edu","is_corresponding":false,"name":"Po-Yin Yen"},{"affiliations":["The Ohio State University , Columbus , United States"],"email":"hwshen@cse.ohio-state.edu","is_corresponding":false,"name":"Han-Wei Shen"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Rui Qiu"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1802","time_end":"","time_stamp":"","time_start":"","title":"VADIS: A Visual Analytics Pipeline for Dynamic Document Representation and Information Seeking","uid":"v-full-1802","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Scalar field comparison is a fundamental task in scientific visualization. In topological data analysis, we compare topological descriptors of scalar fields---such as persistence diagrams and merge trees---as they provide succinct and robust abstract representations. While several similarity measures for topological descriptors seem to be both asymptotically and practically efficient with polynomial time algorithms, they do not scale well when handling large-scale, time-varying scientific data and ensembles. In this paper, we propose a new framework to facilitate the comparative analysis of merge trees, inspired by tools from locality sensitive hashing (LSH). LSH hashes similar objects into the same hash buckets with high probability. We propose two new similarity measures for merge trees that can be computed via LSH, using new extensions to Recursive MinHash and subpath signature, respectively. Our similarity measures are extremely efficient to compute and closely resemble the results of existing measures such as merge tree edit distance or geometric interleaving distance. 
Our experiments demonstrate the utility of our LSH framework in applications such as shape matching, clustering, key event detection, and ensemble summarization.","accessible_pdf":false,"authors":[{"affiliations":["University of Utah, SALT LAKE CITY, United States"],"email":"lyuweiran@gmail.com","is_corresponding":false,"name":"Weiran Lyu"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"g.s.raghavendra@gmail.com","is_corresponding":true,"name":"Raghavendra Sridharamurthy"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"jeffp@cs.utah.edu","is_corresponding":false,"name":"Jeff M. Phillips"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"wang.bei@gmail.com","is_corresponding":false,"name":"Bei Wang"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Raghavendra Sridharamurthy"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1803","time_end":"","time_stamp":"","time_start":"","title":"Fast Comparative Analysis of Merge Trees Using Locality-Sensitive Hashing","uid":"v-full-1803","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"The optimization of cooling systems is important in many cases, for example for cabin and battery cooling in electric cars. Such an optimization is governed by multiple, conflicting objectives and it is performed across a multi-dimensional parameter space. The extent of the parameter space, the complexity of the non-linear model of the system, as well as the time needed per simulation run and factors that are not modeled in the simulation necessitate an iterative, semi-automatic approach. We present an interactive visual optimization approach, where the user works with a p-h diagram to steer an iterative, guided optimization process. A deep learning (DL) model provides estimates for parameters, given a target characterization of the system, while numerical simulation is used to predict system characteristics for an ensemble of parameter sets. Since the DL model only serves as an approximation of the inverse of the cooling system and since target characteristics can be chosen according to different, competing objectives, an iterative optimization process is realized, developing multiple sets of intermediate solutions, which are visually related to each other. The standard p-h diagram, integrated interactively in this approach, is complemented by a dual, also interactive visual representation of additional expressive measures representing the system characteristics. We show how the known four-point semantics of the p-h diagram meaningfully transfers to the dual data representation. 
When evaluating this approach with our partners in the automotive domain, we found that our solution helped with the overall comprehension of the cooling system and that it led to faster convergence during optimization.","accessible_pdf":false,"authors":[{"affiliations":["VRVis Research Center, Vienna, Austria"],"email":"splechtna@vrvis.at","is_corresponding":false,"name":"Rainer Splechtna"},{"affiliations":["Virginia Tech, Blacksburg, United States"],"email":"behravan@vt.edu","is_corresponding":false,"name":"Majid Behravan"},{"affiliations":["AVL AST doo, Zagreb, Croatia"],"email":"mario.jelovic@avl.com","is_corresponding":false,"name":"Mario Jelovic"},{"affiliations":["Virginia Tech, Blacksburg, United States"],"email":"gracanin@vt.edu","is_corresponding":false,"name":"Denis Gracanin"},{"affiliations":["University of Bergen, Bergen, Norway"],"email":"helwig.hauser@uib.no","is_corresponding":false,"name":"Helwig Hauser"},{"affiliations":["VRVis Research Center, Vienna, Austria"],"email":"matkovic@vrvis.at","is_corresponding":true,"name":"Kresimir Matkovic"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Kresimir Matkovic"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1805","time_end":"","time_stamp":"","time_start":"","title":"Interactive Design-of-Experiments: Optimizing a Cooling System","uid":"v-full-1805","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Visualizing relational data is crucial for understanding complex connections between entities in social networks, political affiliations, or biological interactions. Well-known representations like node-link diagrams and adjacency matrices offer valuable insights, but their effectiveness relies on the ability to identify patterns in the underlying topological structure. Reordering strategies and layout algorithms play a vital role in the visualization process since the arrangement of nodes, edges, or cells influences the visibility of these patterns. The BioFabric visualization combines elements of node-link diagrams and adjacency matrices, leveraging the strengths of both: the visual clarity of node-link diagrams and the tabular organization of adjacency matrices. A unique characteristic of BioFabric is the possibility to reorder nodes and edges separately. This raises the question of which combination of layout algorithms best reveals certain patterns. In this paper, we discuss patterns and anti-patterns in BioFabric, such as staircases or escalators, relate them to already established patterns, and propose metrics to evaluate their quality. Based on these quality metrics, we compared combinations of well-established reordering techniques applied to BioFabric with a well-known benchmark data set. Our experiments indicate that the edge order has a stronger influence on revealing patterns than the node layout. The results show that the best combination for revealing staircases is a barycentric node layout, together with an edge order based on node indices and length. Our research contributes a first building block for many promising future research directions, which we also share and discuss. 
A free copy of this paper and all supplemental materials are available at OSF.","accessible_pdf":false,"authors":[{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"fuchs@dbvis.inf.uni-konstanz.de","is_corresponding":true,"name":"Johannes Fuchs"},{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"alexander.frings@uni-konstanz.de","is_corresponding":false,"name":"Alexander Frings"},{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"maria-viktoria.heinle@uni-konstanz.de","is_corresponding":false,"name":"Maria-Viktoria Heinle"},{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"keim@uni-konstanz.de","is_corresponding":false,"name":"Daniel Keim"},{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"sara.di-bartolomeo@uni-konstanz.de","is_corresponding":false,"name":"Sara Di Bartolomeo"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Johannes Fuchs"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1809","time_end":"","time_stamp":"","time_start":"","title":"Quality Metrics and Reordering Strategies for Revealing Patterns in BioFabric Visualizations","uid":"v-full-1809","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Classical bibliography, by scrutinizing preserved catalogs from both official archives and personal collections of accumulated books, examines the books throughout history, thereby elucidating cultural development across historical periods. In this work, we collaborate with domain experts to accomplish the task of data annotation concerning Chinese ancient catalogs. We introduce the CataAnno system that facilitates users in completing annotations more efficiently through cross-linked views, recommendation methods and convenient annotation interactions. The recommendation method can learn the background knowledge and annotation patterns that experts subconsciously integrate into the data during prior annotation processes. CataAnno searches for the most relevant examples previously annotated and recommends them to the user. Meanwhile, the cross-linked views assist users in comprehending the correlations between entries and offer explanations for these recommendations. Evaluation and expert feedback confirm that the CataAnno system, by offering high-quality recommendations and visualizing the relationships between entries, can mitigate the necessity for specialized knowledge during the annotation process. 
This results in enhanced accuracy and consistency in annotations, thereby improving overall efficiency.","accessible_pdf":false,"authors":[{"affiliations":["Peking University, Beijing, China"],"email":"hanning.shao@pku.edu.cn","is_corresponding":true,"name":"Hanning Shao"},{"affiliations":["Peking University, Beijing, China"],"email":"xiaoru.yuan@pku.edu.cn","is_corresponding":false,"name":"Xiaoru Yuan"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Hanning Shao"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1810","time_end":"","time_stamp":"","time_start":"","title":"CataAnno: An Ancient Catalog Annotator for Annotation Cleaning by Recommendation","uid":"v-full-1810","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Over the past decade, several urban visual analytics systems have been proposed to tackle a host of challenges faced by cities, in areas as diverse as transportation, weather, and real estate. Many of these systems have been designed through engagement with urban experts, aiming to distill intricate urban analysis workflows into interactive visualizations and interfaces. The design, implementation, and practical use of these systems, however, still rely on siloed approaches that lead to bespoke tools that are hard to reproduce and extend. At the design level, these systems undervalue rich data workflows from urban experts by usually only treating them as data providers and evaluators. At the implementation level, these systems lack interoperability with other technical frameworks. At the practical use level, these systems tend to be narrowly focused on specific fields, inadvertently creating barriers for cross-domain collaboration. To tackle these gaps, we present Curio, a framework for collaborative urban visual analytics. Curio uses a dataflow model with multiple abstraction levels (code, grammar, GUI elements) to facilitate collaboration across the design and implementation of visual analytics components. The framework allows experts to intertwine preprocessing, managing, and visualization stages while tracking provenance of code and visualizations. In collaboration with urban experts, we evaluate Curio through a diverse series of use cases targeting urban accessibility, urban microclimate, and sunlight access. 
These cases use different types of urban data and domain methodologies to illustrate Curio's flexibility in tackling pressing societal challenges.","accessible_pdf":false,"authors":[{"affiliations":["University of Illinois at Chicago, Chicago, United States"],"email":"gmorei3@uic.edu","is_corresponding":false,"name":"Gustavo Moreira"},{"affiliations":["Massachusetts Institute of Technology , Somerville, United States"],"email":"maryamh@mit.edu","is_corresponding":false,"name":"Maryam Hosseini"},{"affiliations":["University of Illinois Urbana-Champaign, Urbana-Champaign, United States"],"email":"carolinavfs@id.uff.br","is_corresponding":false,"name":"Carolina Veiga Ferreira de Souza"},{"affiliations":["Universidade Federal Fluminense, Niteroi, Brazil"],"email":"lucasalexandre.s.cc@gmail.com","is_corresponding":false,"name":"Lucas Alexandre"},{"affiliations":["Politecnico di Milano, Milano, Italy"],"email":"nicola.colaninno@polimi.it","is_corresponding":false,"name":"Nicola Colaninno"},{"affiliations":["Universidade Federal Fluminense, Niter\u00f3i, Brazil"],"email":"danielcmo@ic.uff.br","is_corresponding":false,"name":"Daniel de Oliveira"},{"affiliations":["Universidade Federal de Pernambuco, Recife, Brazil"],"email":"nivan@cin.ufpe.br","is_corresponding":false,"name":"Nivan Ferreira"},{"affiliations":["Universidade Federal Fluminense , Niteroi, Brazil"],"email":"mlage@ic.uff.br","is_corresponding":false,"name":"Marcos Lage"},{"affiliations":["University of Illinois Chicago, Chicago, United States"],"email":"fabiom@uic.edu","is_corresponding":true,"name":"Fabio Miranda"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Fabio Miranda"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1830","time_end":"","time_stamp":"","time_start":"","title":"Curio: A Dataflow-Based Framework for Collaborative Urban Visual Analytics","uid":"v-full-1830","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"When using exploratory visual analysis to examine multivariate hierarchical data, users often need to query data to narrow down the scope of analysis. However, formulating effective query expressions remains a challenge for multivariate hierarchical data, particularly when datasets become very large. To address this issue, we develop a declarative grammar, HiRegEx (Hierarchical data Regular Expression), for querying and exploring multivariate hierarchical data. Rooted in the extended multi-level task topology framework for tree visualizations (e-MLTT), HiRegEx delineates three query targets (node, path, and subtree) and two aspects for querying these targets (features and positions), and uses operators developed based on classical regular expressions for query construction. We develop a prototype system, TreeQueryER, to integrate an exploratory framework for querying and exploring multivariate hierarchical data based on HiRegEx. The exploratory framework includes three major components: top-down pattern specification, bottom-up data-driven inquiry, and context-creation data overview. 
We validate the expressiveness of HiRegEx with the tasks from the e-MLTT framework and showcase its utility and effectiveness through a usage scenario involving expert users in the analysis of a citation tree dataset.","accessible_pdf":false,"authors":[{"affiliations":["Beijing Institute of Technology, Beijing, China"],"email":"guozhg.li@gmail.com","is_corresponding":true,"name":"Guozheng Li"},{"affiliations":["Beijing Institute of Technology, Beijing, China"],"email":"haotian.mi1@gmail.com","is_corresponding":false,"name":"haotian mi"},{"affiliations":["Beijing Institute of Technology, Beijing, China"],"email":"liuchi02@gmail.com","is_corresponding":false,"name":"Chi Harold Liu"},{"affiliations":["Ochanomizu University, Tokyo, Japan"],"email":"itot@is.ocha.ac.jp","is_corresponding":false,"name":"Takayuki Itoh"},{"affiliations":["Beijing Institute of Technology, Beijing, China"],"email":"wanggrbit@126.com","is_corresponding":false,"name":"Guoren Wang"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Guozheng Li"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1831","time_end":"","time_stamp":"","time_start":"","title":"HiRegEx: Interactive Visual Query and Exploration of Multivariate Hierarchical Data","uid":"v-full-1831","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"The concept of an intelligent augmented reality (AR) assistant has applications as significant as they are wide-ranging, with potential uses in medicine, military endeavors, and mechanics. Such an assistant must be able to perceive the performer\u2019s environment and actions, reason about the state of the environment in relation to a given task, and seamlessly interact with the performer. These interactions typically involve an AR headset equipped with a variety of sensors which capture video, audio, and haptic feedback. Previous works have sought to facilitate the development of such an assistant by visualizing these sensor data streams as well as the machine learning model outputs that support an assistant\u2019s perception and reasoning capabilities. However, existing visual analytics systems do not include biometric data or focus on user modeling, and are only capable of visualizing a single task session for a single performer at a time. Furthermore, they mainly focus on traditional task analysis that typically assumes a linear progression from one step to the next. We propose a visual analytics system that allows users to compare performance during multiple task sessions focusing on non-linear tasks where different paths or sequences can lead to the successful completion of the task. In particular, we design visualizations for understanding user behavior through functional near-infrared spectroscopy (fNIRS) data as a proxy for perception, attention, and memory as well as corresponding motion data (acceleration, angular velocity, and eye gaze). We distill these insights into visual embeddings that allow users to easily select groups of sessions with similar behaviors. We provide case studies that explore how insights into task performance can be gleaned from these visualizations using data collected during helicopter copilot training tasks. 
Finally, we evaluate our approach by conducting an in-depth examination of a think-aloud experiment with five domain experts.","accessible_pdf":false,"authors":[{"affiliations":["New York University, New York, United States"],"email":"s.castelo@nyu.edu","is_corresponding":true,"name":"Sonia Castelo Quispe"},{"affiliations":["New York University, New York, United States"],"email":"jlrulff@gmail.com","is_corresponding":false,"name":"Jo\u00e3o Rulff"},{"affiliations":["New York University, Brooklyn, United States"],"email":"pss442@nyu.edu","is_corresponding":false,"name":"Parikshit Solunke"},{"affiliations":["New York University, New York, United States"],"email":"erin.mcgowan@nyu.edu","is_corresponding":false,"name":"Erin McGowan"},{"affiliations":["New York University, New York City, United States"],"email":"guandewu@nyu.edu","is_corresponding":false,"name":"Guande Wu"},{"affiliations":["New York University, Brooklyn, United States"],"email":"iran@ccrma.stanford.edu","is_corresponding":false,"name":"Iran Roman"},{"affiliations":["New York University, New York, United States"],"email":"rlopez@nyu.edu","is_corresponding":false,"name":"Roque Lopez"},{"affiliations":["New York University, Brooklyn, United States"],"email":"bs3639@nyu.edu","is_corresponding":false,"name":"Bea Steers"},{"affiliations":["New York University, New York, United States"],"email":"qisun@nyu.edu","is_corresponding":false,"name":"Qi Sun"},{"affiliations":["New York University, New York, United States"],"email":"jpbello@nyu.edu","is_corresponding":false,"name":"Juan Pablo Bello"},{"affiliations":["Northrop Grumman Mission Systems, Redondo Beach, United States"],"email":"bradley.feest@ngc.com","is_corresponding":false,"name":"Bradley S Feest"},{"affiliations":["Northrop Grumman, Aurora, United States"],"email":"michael.middleton@ngc.com","is_corresponding":false,"name":"Michael Middleton"},{"affiliations":["Northrop Grumman, Falls Church, United States"],"email":"ryan.mckendrick@ngc.com","is_corresponding":false,"name":"Ryan McKendrick"},{"affiliations":["New York University, New York City, United States"],"email":"csilva@nyu.edu","is_corresponding":false,"name":"Claudio Silva"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Sonia Castelo Quispe"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1833","time_end":"","time_stamp":"","time_start":"","title":"HuBar: A Visual Analytics Tool to Explore Human Behaviour based on fNIRS in AR guidance systems","uid":"v-full-1833","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Shape is commonly used to distinguish between categories in multi-class scatterplots. However, existing guidelines for choosing effective shape palettes rely largely on intuition and do not consider how these needs may change as the number of categories increases. Although shapes, unlike colors, are finite in number, they cannot be represented in a numerical space, making it difficult to propose a general guideline for shape choices or shed light on the design heuristics of designer-crafted shape palettes. This paper presents a series of four experiments evaluating the efficiency of 39 shapes across three tasks -- relative mean judgment tasks, expert choices, and data correlation estimation. 
Given how complex and entangled the results are, rather than relying on conventional shape features for modeling, we built a model and introduced a corresponding design tool that offers recommendations for shape encodings. The perceptual effectiveness of shapes varies significantly across specific pairs, and certain shapes may enhance perceptual efficiency and accuracy. However, how performance varies does not map well to classical features of shape such as angles, fill, or convex hull. We developed a model based on the pairwise relations between shapes measured in our experiments and on the number of shapes required, in order to intelligently recommend shape palettes for a given design. This tool provides designers with agency over shape selection while incorporating empirical elements of perceptual performance captured in our study. Our model advances the understanding of shape perception in visualization contexts and provides practical design guidelines for advanced shape usage in visualization design that optimize perceptual efficiency.","accessible_pdf":false,"authors":[{"affiliations":["University of North Carolina-Chapel Hill, Chapel Hill, United States"],"email":"chint@cs.unc.edu","is_corresponding":true,"name":"Chin Tseng"},{"affiliations":["University of North Carolina-Chapel Hill, Chapel Hill, United States"],"email":"zeyuwang@cs.unc.edu","is_corresponding":false,"name":"Arran Zeyu Wang"},{"affiliations":["University of Oklahoma, Norman, United States"],"email":"quadri@ou.edu","is_corresponding":false,"name":"Ghulam Jilani Quadri"},{"affiliations":["University of North Carolina-Chapel Hill, Chapel Hill, United States"],"email":"danielle.szafir@cs.unc.edu","is_corresponding":false,"name":"Danielle Albers Szafir"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Chin Tseng"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1836","time_end":"","time_stamp":"","time_start":"","title":"An Empirically Grounded Approach for Designing Shape Palettes","uid":"v-full-1836","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"In medical diagnostics of both early disease detection and routine patient care, particle-based contamination of in-vitro diagnostics (IVD) consumables poses a significant threat to patients. Objective data-driven decision making on the severity of contamination is key for reducing risk to patients, while saving time and cost in the quality assessment process. Our collaborators introduced us to their quality control process, including particle data acquisition through image recognition, feature extraction, and attributes reflecting the production context of particles. Shortcomings of the current process are analysis problems, like weak support in exploring thousands of particle images, associated attributes, and ineffective knowledge externalization for sense-making. Following the design study methodology, our contributions are a characterization of the problem space and requirements, the development and validation of DaedalusData, a comprehensive discussion of our study\u2019s learnings, and a generalizable approach for knowledge externalization. 
DaedalusData is a visual analytics system that empowers domain experts to explore particle contamination patterns, to label particles in label alphabets, and to externalize knowledge through semi-supervised label-informed data projections. The results of our case study show that DaedalusData supports experts in generating meaningful, comprehensive data overviews. Additionally, our user study shows that DaedalusData offers high usability, efficiently supports the labeling of large quantities of particles, and utilizes externalized knowledge to augment the dataset. Reflecting on our approach, we discuss insights on dataset augmentation via human knowledge externalization, and on the scalability and trade-offs that come with the adoption of this approach in practice.","accessible_pdf":false,"authors":[{"affiliations":["University of Z\u00fcrich, Z\u00fcrich, Switzerland","Roche pRED, Basel, Switzerland"],"email":"alexander.wyss@protonmail.com","is_corresponding":true,"name":"Alexander Wyss"},{"affiliations":["University of Zurich, Zurich, Switzerland"],"email":"gab.morgenshtern@gmail.com","is_corresponding":false,"name":"Gabriela Morgenshtern"},{"affiliations":["Roche Diagnostics International, Rotkreuz, Switzerland"],"email":"a.hirschhuesler@gmail.com","is_corresponding":false,"name":"Amanda Hirsch-H\u00fcsler"},{"affiliations":["University of Zurich, Zurich, Switzerland"],"email":"bernard@ifi.uzh.ch","is_corresponding":false,"name":"J\u00fcrgen Bernard"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Alexander Wyss"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1865","time_end":"","time_stamp":"","time_start":"","title":"DaedalusData: Exploration, Knowledge Externalization and Labeling of Particles in Medical Manufacturing - A Design Study","uid":"v-full-1865","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Feature grid Scene Representation Networks (SRNs) have been applied to scientific data as compact functional surrogates for analysis and visualization. As SRNs are black-box lossy data representations, assessing the prediction quality is critical for scientific visualization applications to ensure that scientists can trust the information being visualized. Currently, existing architectures do not support inference time reconstruction quality assessment, as voxel-wise errors cannot be evaluated in the absence of ground truth data. By employing uncertain neural network architectures in feature grid SRNs, we obtain prediction variances during inference time to facilitate confidence-aware data reconstruction. Specifically, we propose a parameter-efficient multi-decoder Ensemble SRN (E-SRN) architecture consisting of a shared feature grid with multiple lightweight multi-layer perceptron decoders. E-SRN can generate a set of plausible predictions for a given input coordinate to compute the mean as the ensemble prediction and the variance as a confidence score. The voxel-wise variance can be rendered along with the data to inform the reconstruction quality, or be integrated into uncertainty-aware volume visualization algorithms. 
To prevent the misalignment between the quantified variance and the prediction quality, we propose a novel variance regularization loss for ensemble learning that promotes the Regularized Ensemble SRN (RE-SRN) to obtain a more reliable variance that correlates closely to the true model error. We comprehensively evaluate the quality of variance quantification and data reconstruction of Monte Carlo Dropout (MCD), Mean Field Variational Inference (MFVI), Deep Ensemble (DE), and Predicting Variance (PV) in comparison with our proposed E-SRN and RE-SRN applied to state-of-the-art feature grid SRNs across diverse scalar field datasets. We demonstrate that RE-SRN attains the most accurate data reconstruction and competitive variance-error correlation among uncertain SRNs under the same neural network parameter budgets. Furthermore, we present an adaptation of uncertainty-aware volume rendering and shed light on the potential of incorporating uncertain predictions in improving the quality of volume rendering for uncertain SRNs. Through ablation studies on the regularization strength and ensemble size, we show that E-SRN and RE-SRN are expected to perform sufficiently well with a default configuration without requiring customized hyperparameter settings for different datasets.","accessible_pdf":false,"authors":[{"affiliations":["The Ohio State University, Columbus, United States"],"email":"xiong.336@osu.edu","is_corresponding":true,"name":"Tianyu Xiong"},{"affiliations":["The Ohio State University, Columbus, United States"],"email":"wurster.18@osu.edu","is_corresponding":false,"name":"Skylar Wolfgang Wurster"},{"affiliations":["The Ohio State University, Columbus, United States","Argonne National Laboratory, Lemont, United States"],"email":"guo.2154@osu.edu","is_corresponding":false,"name":"Hanqi Guo"},{"affiliations":["Argonne National Laboratory, Lemont, United States"],"email":"tpeterka@mcs.anl.gov","is_corresponding":false,"name":"Tom Peterka"},{"affiliations":["The Ohio State University , Columbus , United States"],"email":"hwshen@cse.ohio-state.edu","is_corresponding":false,"name":"Han-Wei Shen"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Tianyu Xiong"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1866","time_end":"","time_stamp":"","time_start":"","title":"Regularized Multi-Decoder Ensemble for an Error-Aware Scene Representation Network","uid":"v-full-1866","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"A layered network is an important category of graph in which every node is assigned to a layer and layers are drawn as parallel or radial lines. They are commonly used to display temporal data or hierarchical networks. Previous research has demonstrated that minimizing edge crossings is the most important criterion to consider when looking to improve the readability of such networks. While heuristic approaches exist for crossing minimization, we are interested in optimal approaches to the problem that prioritize human readability over computational scalability. We aim to improve the usefulness and applicability of such optimal methods by understanding and improving their scalability to larger graphs. 
This paper categorizes and evaluates the state-of-the-art linear programming formulations for exact crossing minimization and describes nine new and existing techniques that could plausibly accelerate the optimization algorithm. Through a computational evaluation, we explore each technique's effect on calculation time and how the techniques assist or inhibit one another, allowing researchers and practitioners to adapt them to the characteristics of their networks. Our best-performing techniques yielded a median improvement of 2.5--17x depending on the solver used, giving us the capability to create optimal layouts faster and for larger networks. We provide an open-source implementation of our methodology in Python, where users can pick which combination of techniques to enable according to their use case. A free copy of this paper and all supplemental materials, datasets used, and source code are available at https://osf.io/.","accessible_pdf":false,"authors":[{"affiliations":["Northeastern University, Boston, United States"],"email":"wilson.conn@northeastern.edu","is_corresponding":true,"name":"Connor Wilson"},{"affiliations":["Northeastern University, Boston, United States"],"email":"eduardopuertac@gmail.com","is_corresponding":false,"name":"Eduardo Puerta"},{"affiliations":["northeastern university, Boston, United States"],"email":"turokhunter@gmail.com","is_corresponding":false,"name":"Tarik Crnovrsanin"},{"affiliations":["University of Konstanz, Konstanz, Germany","Northeastern University, Boston, United States"],"email":"sara.di-bartolomeo@uni-konstanz.de","is_corresponding":false,"name":"Sara Di Bartolomeo"},{"affiliations":["Northeastern University, Boston, United States"],"email":"c.dunne@northeastern.edu","is_corresponding":false,"name":"Cody Dunne"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Connor Wilson"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1874","time_end":"","time_stamp":"","time_start":"","title":"Evaluating and extending speedup techniques for optimal crossing minimization in layered graph drawings","uid":"v-full-1874","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Merge trees are a valuable tool in scientific visualization of scalar fields; however, current methods for merge tree comparisons are computationally expensive, primarily due to the exhaustive matching between tree nodes. To address this challenge, we introduce the merge tree neural networks (MTNN), a learned neural network model designed for merge tree comparison. The MTNN enables rapid and high-quality similarity computation. We first demonstrate how graph neural networks (GNNs), which emerged as an effective encoder for graphs, can be trained to produce embeddings of merge trees in vector spaces that enable efficient similarity comparison. Next, we formulate the novel MTNN model that further improves the similarity comparisons by integrating the tree and node embeddings with a new topological attention mechanism. We demonstrate the effectiveness of our model on real-world data in different domains and examine our model's generalizability across various datasets. Our experimental analysis demonstrates our approach's superiority in accuracy and efficiency. 
In particular, we speed up the prior state-of-the-art by more than 100x on the benchmark datasets while maintaining an error rate below 0.1%.","accessible_pdf":false,"authors":[{"affiliations":["Tulane University, New Orleans, United States"],"email":"yqin2@tulane.edu","is_corresponding":true,"name":"Yu Qin"},{"affiliations":["Montana State University, Bozeman, United States"],"email":"brittany.fasy@montana.edu","is_corresponding":false,"name":"Brittany Terese Fasy"},{"affiliations":["Tulane University, New Orleans, United States"],"email":"cwenk@tulane.edu","is_corresponding":false,"name":"Carola Wenk"},{"affiliations":["Tulane University, New Orleans, United States"],"email":"bsumma@tulane.edu","is_corresponding":false,"name":"Brian Summa"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yu Qin"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1880","time_end":"","time_stamp":"","time_start":"","title":"Rapid and Precise Topological Comparison with Merge Tree Neural Networks","uid":"v-full-1880","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"The importance of data charts is self-evident, given their ability to express complex data in a simple format that facilitates quick and easy comparisons, analysis, and consumption. However, the inherent visual nature of the charts creates barriers for people with visual impairments to reap the associated benefits to the same extent as their sighted peers. While extant research has predominantly focused on understanding and addressing these barriers for blind screen reader users, the needs of low-vision screen magnifier users have been largely overlooked. In an interview study, almost all low-vision participants stated that it was challenging to interact with data charts on small screen devices such as smartphones and tablets, even though they could technically \u201csee\u201d the chart content. They ascribed these challenges mainly to the magnification-induced loss of visual context that connected data points with each other and also with chart annotations, e.g., axis values. In this paper, we present a method that addresses this problem by automatically transforming charts that are typically non-interactive images into personalizable interactive charts which allow selective viewing of desired data points and preserve visual context as much as possible under screen enlargement. We evaluated our method in a usability study with 26 low-vision participants, who all performed a set of representative chart-related tasks under different study conditions. 
In the study, we observed that our method significantly improved the usability of charts over both the status quo screen magnifier and a state-of-the-art space compaction-based solution.","accessible_pdf":false,"authors":[{"affiliations":["Old Dominion University, Norfolk, United States"],"email":"yprak001@odu.edu","is_corresponding":true,"name":"Yash Prakash"},{"affiliations":["Old Dominion University, Norfolk, United States"],"email":"pkhan002@odu.edu","is_corresponding":false,"name":"Pathan Aseef Khan"},{"affiliations":["Old Dominion University, Norfolk, United States"],"email":"anaya001@odu.edu","is_corresponding":false,"name":"Akshay Kolgar Nayak"},{"affiliations":["Old Dominion University, Norfolk, United States"],"email":"uksjayarathna@gmail.com","is_corresponding":false,"name":"Sampath Jayarathna"},{"affiliations":["Michigan State University, East Lansing, United States"],"email":"leehaena@msu.edu","is_corresponding":false,"name":"Hae-Na Lee"},{"affiliations":["Old Dominion University, Norfolk, United States"],"email":"vganjigu@odu.edu","is_corresponding":false,"name":"Vikas Ashok"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yash Prakash"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"full0","slot_id":"v-full-1917","time_end":"","time_stamp":"","time_start":"","title":"Towards Enhancing Low Vision Usability of Data Charts on Smartphones","uid":"v-full-1917","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""}],"time_start":"","title":"Full Papers","track":"","zoom_broadcast_link":"","zoom_private_link":"","zoom_private_meeting":"","zoom_private_password":""}]},"v-ismar":{"event":"ISMAR Invited Partnership Presentations","event_description":"","event_prefix":"v-ismar","event_type":"invited","event_url":"","long_name":"ISMAR Invited Partnership Presentations","organizers":[],"sessions":[]},"v-panels":{"event":"VIS Panels","event_description":"","event_prefix":"v-panels","event_type":"panel","event_url":"","long_name":"VIS Panels","organizers":[],"sessions":[]},"v-short":{"event":"VIS Short Papers","event_description":"","event_prefix":"v-short","event_type":"short","event_url":"","long_name":"VIS Short Papers","organizers":[],"sessions":[{"chair":[],"discord_category":"","discord_channel":"","discord_channel_id":"","discord_link":"","event_prefix":"v-short","ff_link":"","session_id":"short0","session_image":"short0.png","time_end":"","time_slots":[{"abstract":"From dirty data to intentional deception, there are many threats to the validity of data-driven decisions. Making use of data, especially new or unfamiliar data, therefore requires a degree of trust or verification. How is this trust established? In this paper, we present the results of a series of interviews with both producers and consumers of data artifacts (outputs of data ecosystems like spreadsheets, charts, and dashboards) aimed at understanding strategies and obstacles to building trust in data. We find a recurring need, but lack of existing standards, for data validation and verification, especially among data consumers. 
We therefore propose a set of data guards: methods and tools for fostering trust in data artifacts.","accessible_pdf":false,"authors":[{"affiliations":["Tableau Research, Seattle, United States"],"email":"nicole.sultanum@gmail.com","is_corresponding":true,"name":"Nicole Sultanum"},{"affiliations":["Tableau Research, Seattle, United States"],"email":"bromley.denny@gmail.com","is_corresponding":false,"name":"Dennis Bromley"},{"affiliations":["Northeastern University, Portland, United States"],"email":"m.correll@northeastern.edu","is_corresponding":false,"name":"Michael Correll"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Nicole Sultanum"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1040","time_end":"","time_stamp":"","time_start":"","title":"Data Guards: Challenges and Solutions for Fostering Trust in Data","uid":"v-short-1040","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"In the rapidly evolving field of deep learning, the traditional methodologies for designing deep learning models predominantly rely on code-based frameworks. While these approaches provide flexibility, they also create a significant barrier to entry for non-experts and obscure the immediate impact of architectural decisions on model performance. In response to this challenge, recent no-code approaches have been developed with the aim of enabling easy model development through graphical interfaces. However, both traditional and no-code methodologies share a common limitation: the inability to predict model outcomes or identify issues without executing the model. To address this limitation, we introduce an intuitive visual feedback-based no-code approach to visualize and analyze deep learning models during the design phase. This approach utilizes dataflow-based visual programming with dynamic visual encoding of model architecture. A user study was conducted with deep learning developers to demonstrate the effectiveness of our approach in enhancing the model design process, improving model understanding, and facilitating a more intuitive development experience. 
The findings of this study suggest that real-time architectural visualization significantly contributes to more efficient model development and a deeper understanding of model behaviors.","accessible_pdf":false,"authors":[{"affiliations":["VIENCE Inc., Seoul, Korea, Republic of","Korea University, Seoul, Korea, Republic of"],"email":"juny0603@gmail.com","is_corresponding":true,"name":"JunYoung Choi"},{"affiliations":["VIENCE Inc., Seoul, Korea, Republic of"],"email":"wings159@vience.co.kr","is_corresponding":false,"name":"Sohee Park"},{"affiliations":["Korea University, Seoul, Korea, Republic of"],"email":"hellenkoh@gmail.com","is_corresponding":false,"name":"GaYeon Koh"},{"affiliations":["VIENCE Inc., Seoul, Korea, Republic of"],"email":"k0seo0330@vience.co.kr","is_corresponding":false,"name":"Youngseo Kim"},{"affiliations":["VIENCE Inc., Seoul, Korea, Republic of","Korea University, Seoul, Korea, Republic of"],"email":"wkjeong@korea.ac.kr","is_corresponding":false,"name":"Won-Ki Jeong"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["JunYoung Choi"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1047","time_end":"","time_stamp":"","time_start":"","title":"Intuitive Design of Deep Learning Models through Visual Feedback","uid":"v-short-1047","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"This comparative study evaluates various neural surface reconstruction methods, particularly focusing on their implications for scientific visualization through reconstructing 3D surfaces via multi-view rendering images. We categorize ten methods into neural radiance fields and neural implicit surfaces, uncovering the benefits of leveraging distance functions (i.e., SDFs and UDFs) to enhance the accuracy and smoothness of the reconstructed surfaces. Our findings highlight the efficiency and quality of NeuS2 for reconstructing closed surfaces and identify NeUDF as a promising candidate for reconstructing open surfaces despite some limitations. We further pinpoint directions for future research, including improving detail capture, optimizing UDF computations, and refining surface extraction methods. 
By sharing our benchmark dataset, we invite researchers to test the performance of their methods, contributing to the advancement of surface reconstruction solutions for scientific visualization.","accessible_pdf":false,"authors":[{"affiliations":["University of Notre Dame, Notre Dame, United States"],"email":"syao2@nd.edu","is_corresponding":true,"name":"Siyuan Yao"},{"affiliations":["Wuhan University, Wuhan, China"],"email":"song.wx@whu.edu.cn","is_corresponding":false,"name":"Weixi Song"},{"affiliations":["University of Notre Dame, Notre Dame, United States"],"email":"chaoli.wang@nd.edu","is_corresponding":false,"name":"Chaoli Wang"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Siyuan Yao"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1049","time_end":"","time_stamp":"","time_start":"","title":"A Comparative Study of Neural Surface Reconstruction for Scientific Visualization","uid":"v-short-1049","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Direct volume rendering using ray-casting is widely used in practice. By using GPUs and applying acceleration techniques such as empty space skipping, high frame rates are possible on modern hardware. This enables performance-critical use-cases such as virtual reality volume rendering. The currently fastest known technique uses volumetric distance maps to skip empty sections of the volume during ray-casting but requires the distance map to be updated per transfer function change. In this paper, we demonstrate a technique for subdividing the volume intensity range into partitions and deriving what we call partitioned distance maps. These can be used to accelerate the distance map computation for a newly changed transfer function by a factor of up to 30. 
This allows the currently fastest known empty space skipping approach to be used while maintaining high frame rates even when the transfer function is changed frequently.","accessible_pdf":false,"authors":[{"affiliations":["University of Applied Sciences Wiener Neustadt, Wiener Neustadt, Austria"],"email":"michael.rauter@fhwn.ac.at","is_corresponding":true,"name":"Michael Rauter"},{"affiliations":["Medical University of Vienna, Vienna, Austria"],"email":"lukas.a.zimmermann@meduniwien.ac.at","is_corresponding":false,"name":"Lukas Zimmermann PhD"},{"affiliations":["University of Applied Sciences Wiener Neustadt, Wiener Neustadt, Austria"],"email":"markus.zeilinger@fhwn.ac.at","is_corresponding":false,"name":"Markus Zeilinger PhD"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Michael Rauter"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1054","time_end":"","time_stamp":"","time_start":"","title":"Accelerating Transfer Function Update for Distance Map based Volume Rendering","uid":"v-short-1054","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"We present FCNR, a fast compressive neural representation for tens of thousands of visualization images under varying viewpoints and timesteps. The existing NeRVI solution, albeit enjoying a high compression rate, incurs slow speeds in encoding and decoding. Built on the recent advances in stereo image compression, FCNR assimilates stereo context modules and joint context transfer modules to compress image pairs. Our solution significantly improves encoding and decoding speed while maintaining high reconstruction quality and satisfying compression rate. To demonstrate its effectiveness, we compare FCNR with state-of-the-art neural compression methods, including E-NeRV, HNeRV, NeRVI, and ECSIC.","accessible_pdf":false,"authors":[{"affiliations":["University of Notre Dame, Notre Dame, United States"],"email":"ylu25@nd.edu","is_corresponding":true,"name":"Yunfei Lu"},{"affiliations":["University of Notre Dame, Notre Dame, United States"],"email":"pgu@nd.edu","is_corresponding":false,"name":"Pengfei Gu"},{"affiliations":["University of Notre Dame, Notre Dame, United States"],"email":"chaoli.wang@nd.edu","is_corresponding":false,"name":"Chaoli Wang"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yunfei Lu"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1056","time_end":"","time_stamp":"","time_start":"","title":"FCNR: Fast Compressive Neural Representation of Visualization Images","uid":"v-short-1056","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Real-world datasets often consist of quantitative and categorical variables. The analyst needs to focus on either kind separately or both jointly. We propose a visualization technique that tackles these challenges by supporting visual cluster and set analysis. 
In this paper, we investigate how its visualization parameters affect the accuracy and speed of cluster and set analysis tasks in a controlled experiment. Our findings show that, with the proper settings, our visualization can support both task types well. However, we did not find settings suitable for the joint task, which provides opportunities for future research.","accessible_pdf":false,"authors":[{"affiliations":["TU Wien, Vienna, Austria"],"email":"nikolaus.piccolotto@tuwien.ac.at","is_corresponding":true,"name":"Nikolaus Piccolotto"},{"affiliations":["TU Wien, Vienna, Austria"],"email":"mwallinger@ac.tuwien.ac.at","is_corresponding":false,"name":"Markus Wallinger"},{"affiliations":["Institute of Visual Computing and Human-Centered Technology, Vienna, Austria"],"email":"miksch@ifs.tuwien.ac.at","is_corresponding":false,"name":"Silvia Miksch"},{"affiliations":["TU Wien, Vienna, Austria"],"email":"markus.boegl@tuwien.ac.at","is_corresponding":false,"name":"Markus B\u00f6gl"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Nikolaus Piccolotto"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1057","time_end":"","time_stamp":"","time_start":"","title":"On Combined Visual Cluster and Set Analysis","uid":"v-short-1057","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Semantic interaction (SI) in Dimension Reduction (DR) of images allows users to incorporate feedback through direct manipulation of the 2D positions of images. Through interaction, users specify a set of pairwise relationships that the DR should aim to capture. Existing methods for images incorporate feedback into the DR through feature weights on abstract embedding features. However, if the original embedding features do not suitably capture the user's task, then the DR cannot either. We propose ImageSI, an SI method for image DR that incorporates user feedback directly into the image model to update the underlying embeddings, rather than weighting them. In doing so, ImageSI ensures that the embeddings suitably capture the features necessary for the task so that the DR can subsequently organize images using those features. We present two variations of ImageSI using different loss functions: ImageSI_MDS-Inverse, which prioritizes the explicit pairwise relationships from the interaction, and ImageSI_Triplet, which prioritizes clustering, using the interaction to define groups of images. 
Finally, we present a usage scenario and a simulation-based evaluation to demonstrate the utility of ImageSI and compare it to current methods.","accessible_pdf":false,"authors":[{"affiliations":["Virginia Tech, Blacksburg, United States"],"email":"jiayuelin@vt.edu","is_corresponding":false,"name":"Jiayue Lin"},{"affiliations":["Tulane University, New Orleans, United States"],"email":"rfaust1@tulane.edu","is_corresponding":true,"name":"Rebecca Faust"},{"affiliations":["Virginia Tech, Blacksburg, United States"],"email":"north@vt.edu","is_corresponding":false,"name":"Chris North"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Rebecca Faust"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1058","time_end":"","time_stamp":"","time_start":"","title":"ImageSI: Semantic Interaction for Deep Learning Image Projections","uid":"v-short-1058","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Gantt charts are a widely-used idiom for visualizing temporal discrete event sequence data where dependencies exist between events. They are popular in domains such as manufacturing and computing for their intuitive layout of such data. However, these domains frequently generate data at scales which tax both the visual representation and the ability to render it at interactive speeds. To aid visualization developers who use Gantt charts in these situations, we develop a task taxonomy of low-level visualization tasks supported by Gantt charts and connect them to the data queries needed to support them. Our taxonomy is derived through a systematic literature survey of visualizations using Gantt charts over the past 30 years.","accessible_pdf":false,"authors":[{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"sayefsakin@sci.utah.edu","is_corresponding":true,"name":"Sayef Azad Sakin"},{"affiliations":["The University of Utah, Salt Lake City, United States"],"email":"kisaacs@sci.utah.edu","is_corresponding":false,"name":"Katherine E. Isaacs"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Sayef Azad Sakin"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1059","time_end":"","time_stamp":"","time_start":"","title":"A Literature-based Visualization Task Taxonomy for Gantt charts","uid":"v-short-1059","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Annotations are a critical component of visualizations, helping viewers interpret the visual representation and highlighting critical data insights. Despite their significant role, we lack an understanding of how annotations can be incorporated into other data representations, such as physicalizations and sonifications. Given the emergent nature of these representations, sonifications and physicalizations lack formalized conventions (e.g., design space, vocabulary), which can introduce challenges for audiences to interpret the intended data encoding. 
To address this challenge, this work focuses on how annotations can be more tightly integrated into the design process of creating sonifications and physicalizations. In an exploratory study with 13 designers, we explore how visualization annotation techniques can be adapted to sonic and physical modalities. Our work highlights how annotations for sonifications and physicalizations are inseparable from their data encodings.","accessible_pdf":false,"authors":[{"affiliations":["Whitman College, Walla Walla, United States"],"email":"sorensor@whitman.edu","is_corresponding":false,"name":"Rhys Sorenson-Graff"},{"affiliations":["University of Colorado Boulder, Boulder, United States"],"email":"sandra.bae@colorado.edu","is_corresponding":true,"name":"S. Sandra Bae"},{"affiliations":["Whitman College, Walla Walla, United States"],"email":"wirfsbro@colorado.edu","is_corresponding":false,"name":"Jordan Wirfs-Brock"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["S. Sandra Bae"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1062","time_end":"","time_stamp":"","time_start":"","title":"Integrating Annotations into the Design Process for Sonifications and Physicalizations","uid":"v-short-1062","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Large Language Models (LLMs) have demonstrated remarkable versatility in visualization authoring, but often generate suboptimal designs that are invalid or fail to adhere to design guidelines for effective visualization. We present Bavisitter, a natural language interface that integrates established visualization design guidelines into LLMs. Based on our survey of design issues in LLM-generated visualizations, Bavisitter monitors the generated visualizations during a visualization authoring dialogue to detect an issue. When an issue is detected, it intervenes in the dialogue, suggesting possible solutions to the issue by modifying the prompts. 
We also demonstrate two use cases where Bavisitter detects and resolves design issues from the actual LLM-generated visualizations.","accessible_pdf":false,"authors":[{"affiliations":["Sungkyunkwan University, Suwon, Korea, Republic of"],"email":"jiwnchoi@skku.edu","is_corresponding":true,"name":"Jiwon Choi"},{"affiliations":["Sungkyunkwan University, Suwon, Korea, Republic of"],"email":"dlwodnd00@skku.edu","is_corresponding":false,"name":"Jaeung Lee"},{"affiliations":["Sungkyunkwan University, Suwon, Korea, Republic of"],"email":"jmjo@skku.edu","is_corresponding":false,"name":"Jaemin Jo"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Jiwon Choi"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1064","time_end":"","time_stamp":"","time_start":"","title":"Bavisitter: Integrating Design Guidelines into Large Language Models for Visualization Authoring","uid":"v-short-1064","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Although many dimensionality reduction (DR) techniques employ stochastic methods for computational efficiency, such as negative sampling or stochastic gradient descent, their impact on the projection has been underexplored. In this work, we investigate how such stochasticity affects the stability of projections and present a novel DR technique, GhostUMAP, to measure the pointwise instability of projections. Our idea is to introduce clones of data points, \"ghosts\", into UMAP's layout optimization process. Ghosts are designed to be completely passive: they do not affect any others but are influenced by attractive and repulsive forces from the original data points. After a single optimization run, GhostUMAP can capture the projection instability of data points by measuring the variance with the projected positions of their ghosts. We also present a successive halving technique to reduce the computation of GhostUMAP. 
Our results suggest that GhostUMAP can reveal unstable data points with a reasonable computational overhead.","accessible_pdf":false,"authors":[{"affiliations":["Sungkyunkwan University, Suwon, Korea, Republic of"],"email":"mw.jung@skku.edu","is_corresponding":true,"name":"Myeongwon Jung"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"takanori.fujiwara@liu.se","is_corresponding":false,"name":"Takanori Fujiwara"},{"affiliations":["Sungkyunkwan University, Suwon, Korea, Republic of"],"email":"jmjo@skku.edu","is_corresponding":false,"name":"Jaemin Jo"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Myeongwon Jung"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1065","time_end":"","time_stamp":"","time_start":"","title":"GhostUMAP: Measuring Pointwise Instability in Dimensionality Reduction","uid":"v-short-1065","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Integrating textual content, such as titles, annotations, and captions, with visualizations facilitates comprehension and takeaways during data exploration. Yet current tools often lack mechanisms for integrating meaningful text with visual data. This paper introduces DASH, a bimodal data exploration tool that supports integrating semantic levels into the interactive process of visualization and text-based analysis. DASH operationalizes a modified version of Lundgard et al.'s semantic hierarchy model that categorizes data descriptions into four levels ranging from basic encodings to high-level insights. By leveraging this structured semantic level framework and a large language model's text generation capabilities, DASH enables the creation of data-driven narratives via drag-and-drop user interaction. Through a preliminary user evaluation, we discuss the utility of DASH's text and chart integration capabilities when participants perform data exploration with the tool. Based on the study's feedback and observations, we discuss implications for designing unified text and chart authoring tools.","accessible_pdf":false,"authors":[{"affiliations":["Tableau Research, Seattle, United States"],"email":"bromley.denny@gmail.com","is_corresponding":true,"name":"Dennis Bromley"},{"affiliations":["Tableau Research, Palo Alto, United States"],"email":"vsetlur@tableau.com","is_corresponding":false,"name":"Vidya Setlur"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Dennis Bromley"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1068","time_end":"","time_stamp":"","time_start":"","title":"DASH: A Bimodal Data Exploration Tool for Interactive Text and Visualizations","uid":"v-short-1068","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Recent advancements in vision models have significantly enhanced their ability to perform complex chart understanding tasks, such as chart captioning and chart question answering. However, assessing how these models process charts remains challenging. 
Existing benchmarks only coarsely evaluate how well the model performs the given task without thoroughly evaluating the underlying mechanisms that drive performance, such as how models extract image embeddings. This gap limits our understanding of the model's perceptual capabilities regarding fundamental graphical components. Therefore, we introduce a novel evaluation framework designed to assess the graphical perception of image embedding models. In the context of chart comprehension, we examine two main aspects of channel effectiveness: accuracy and discriminability of various visual channels. We first assess channel accuracy through the linearity of embeddings, which is the degree to which the perceived magnitude is proportional to the size of the stimulus. Conversely, distances between embeddings serve as a measure of discriminability; embeddings that are far apart can be considered discriminable. Our experiments on a general image embedding model, CLIP, showed that it perceives channel accuracy differently from humans and demonstrated distinct discriminability in specific channels such as length, tilt, and curvature. We aim to extend our work as a more general benchmark for reliable visual encoders and enhance a model for two distinct goals for future applications: precise chart comprehension and mimicking human perception.","accessible_pdf":false,"authors":[{"affiliations":["Seoul National University, Seoul, Korea, Republic of"],"email":"dtngus0111@gmail.com","is_corresponding":true,"name":"Soohyun Lee"},{"affiliations":["Seoul National University, Seoul, Korea, Republic of"],"email":"jangsus1@snu.ac.kr","is_corresponding":false,"name":"Minsuk Chang"},{"affiliations":["Seoul National University, Seoul, Korea, Republic of"],"email":"shpark@hcil.snu.ac.kr","is_corresponding":false,"name":"Seokhyeon Park"},{"affiliations":["Seoul National University, Seoul, Korea, Republic of"],"email":"jseo@snu.ac.kr","is_corresponding":false,"name":"Jinwook Seo"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Soohyun Lee"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1072","time_end":"","time_stamp":"","time_start":"","title":"Assessing Graphical Perception of Image Embedding Models using Channel Effectiveness","uid":"v-short-1072","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Data visualizations are reaching global audiences. As people who use Right-to-left (RTL) scripts constitute over a billion potential data visualization users, a need emerges to investigate how visualizations are communicated to them. Web design guidelines exist to assist designers in adapting different reading directions, yet we lack a similar standard for visualization design. This paper investigates the design patterns of visualizations with RTL scripts. We collected 128 visualizations from data-driven articles published in Arabic news outlets and analyzed their chart composition, textual elements, and sources. Our analysis suggests that designers tend to apply RTL approaches more frequently for categorical data. 
In other situations, we observed a mix of Left-to-right (LTR) and RTL approaches for chart directions and structures, sometimes inconsistently utilized within the same article. We reflect on this lack of clear guidelines for RTL data visualizations and derive implications for visualization authoring tools and future research directions.","accessible_pdf":false,"authors":[{"affiliations":["University College London, London, United Kingdom","UAE University , Al Ain, United Arab Emirates"],"email":"muna.alebri.19@ucl.ac.uk","is_corresponding":true,"name":"Muna Alebri"},{"affiliations":["Worcester Polytechnic Institute, Worcester, United States"],"email":"ntrakotondravony@wpi.edu","is_corresponding":false,"name":"No\u00eblle Rakotondravony"},{"affiliations":["Worcester Polytechnic Institute, Worcester, United States"],"email":"ltharrison@wpi.edu","is_corresponding":false,"name":"Lane Harrison"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Muna Alebri"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1078","time_end":"","time_stamp":"","time_start":"","title":"Design Patterns in Right-to-Left Visualizations: The Case of Arabic Content","uid":"v-short-1078","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Image datasets serve as the foundation for machine learning models in computer vision, significantly influencing model capabilities, performance, and biases alongside architectural considerations. Therefore, understanding the composition and distribution of these datasets has become increasingly crucial. To address the need for intuitive exploration of these datasets, we propose AEye, an extensible and scalable visualization tool tailored to image datasets. AEye utilizes a contrastively trained model to embed images into semantically meaningful high-dimensional representations, facilitating data clustering and organization. To visualize the high-dimensional representations, we project them onto a two-dimensional plane and arrange images in layers so users can seamlessly navigate and explore them interactively. Furthermore, AEye facilitates semantic search functionalities for both text and image queries, enabling users to search for content. 
We open-source the codebase for AEye and provide a simple configuration to add additional datasets.","accessible_pdf":false,"authors":[{"affiliations":["ETH Zurich, Zurich, Switzerland"],"email":"fgroetschla@ethz.ch","is_corresponding":false,"name":"Florian Gr\u00f6tschla"},{"affiliations":["ETH Zurich, Zurich, Switzerland"],"email":"lanzendoerfer@ethz.ch","is_corresponding":false,"name":"Luca A Lanzend\u00f6rfer"},{"affiliations":["ETH Zurich, Zurich, Switzerland"],"email":"mcalzavara@student.ethz.ch","is_corresponding":false,"name":"Marco Calzavara"},{"affiliations":["ETH Zurich, Zurich, Switzerland"],"email":"wattenhofer@ethz.ch","is_corresponding":false,"name":"Roger Wattenhofer"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Florian Gr\u00f6tschla"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1079","time_end":"","time_stamp":"","time_start":"","title":"AEye: A Visualization Tool for Image Datasets","uid":"v-short-1079","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"The sine illusion occurs when more quickly changing pairs of lines lead to bigger underestimates of the delta between them. In a user study, we evaluate three visual manipulations for mitigating the sine illusion: dotted lines, aligned gridlines, and offset gridlines. We asked participants to compare the deltas between two lines at two time points and found aligned gridlines to be the most effective in mitigating the sine illusion. Using data from the user study, we produced a model that predicts the impact of the sine illusion in line charts by accounting for the ratio of the vertical distance between the two points of comparison. When the ratio is less than 50\\%, participants begin to be influenced by the sine illusion. This effect can be significantly exacerbated when the difference between the two deltas falls under 30\\%. We compared two explanations for the sine illusion based on our data: either participants were mistakenly using the perpendicular distance between the two lines to make their comparison (the perpendicular explanation), or they incorrectly relied on the length of the line segment perpendicular to the angle bisector of the bottom and top lines (the equal triangle explanation). 
We found the equal triangle explanation to be the more predictive model explaining participant behaviors.","accessible_pdf":false,"authors":[{"affiliations":["Google LLC, San Francisco, United States"],"email":"cknit1999@gmail.com","is_corresponding":false,"name":"Clayton J Knittel"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"jawuah3@gatech.edu","is_corresponding":false,"name":"Jane Awuah"},{"affiliations":["Northwestern University, Evanston, United States"],"email":"franconeri@northwestern.edu","is_corresponding":false,"name":"Steven L Franconeri"},{"affiliations":["Georgia Tech, Atlanta, United States"],"email":"cxiong@gatech.edu","is_corresponding":true,"name":"Cindy Xiong Bearfield"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Cindy Xiong Bearfield"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1081","time_end":"","time_stamp":"","time_start":"","title":"Gridlines Mitigate Sine Illusion in Line Charts","uid":"v-short-1081","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"In healthcare, AI techniques are widely used for tasks like risk assessment and anomaly detection. Despite AI's potential as a valuable assistant, its role in complex medical data analysis often oversimplifies human-AI collaboration dynamics. To address this, we collaborated with a local hospital, engaging six physicians and one data scientist in a formative study. From this collaboration, we propose a framework integrating two-phase interactive visualization systems: one for Human-Led, AI-Assisted Retrospective Analysis and another for AI-Mediated, Human-Reviewed Iterative Modeling. 
This framework aims to enhance understanding and discussion around effective human-AI collaboration in healthcare.","accessible_pdf":false,"authors":[{"affiliations":["ShanghaiTech University, Shanghai, China","ShanghaiTech University, Shanghai, China"],"email":"ouyy@shanghaitech.edu.cn","is_corresponding":true,"name":"Yang Ouyang"},{"affiliations":["University of Illinois at Urbana-Champaign, Champaign, United States","University of Illinois at Urbana-Champaign, Champaign, United States"],"email":"zhang414@illinois.edu","is_corresponding":false,"name":"Chenyang Zhang"},{"affiliations":["ShanghaiTech University, Shanghai, China","ShanghaiTech University, Shanghai, China"],"email":"wanghe1@shanghaitech.edu.cn","is_corresponding":false,"name":"He Wang"},{"affiliations":["Zhongshan Hospital Fudan University, Shanghai, China","Zhongshan Hospital Fudan University, Shanghai, China"],"email":"15301050137@fudan.edu.cn","is_corresponding":false,"name":"Tianle Ma"},{"affiliations":["Zhongshan Hospital Fudan University, Shanghai, China","Zhongshan Hospital Fudan University, Shanghai, China"],"email":"cjiang_fdu@yeah.net","is_corresponding":false,"name":"Chang Jiang"},{"affiliations":["Zhongshan Hospital Fudan University, Shanghai, China","Zhongshan Hospital Fudan University, Shanghai, China"],"email":"522649732@qq.com","is_corresponding":false,"name":"Yuheng Yan"},{"affiliations":["Zhongshan Hospital Fudan University, Shanghai, China","Zhongshan Hospital Fudan University, Shanghai, China"],"email":"yan.zuoqin@zs-hospital.sh.cn","is_corresponding":false,"name":"Zuoqin Yan"},{"affiliations":["Hong Kong University of Science and Technology, Hong Kong, Hong Kong","Hong Kong University of Science and Technology, Hong Kong, Hong Kong"],"email":"mxj@cse.ust.hk","is_corresponding":false,"name":"Xiaojuan Ma"},{"affiliations":["Southeast University, Nanjing, China","Southeast University, Nanjing, China"],"email":"cshiag@connect.ust.hk","is_corresponding":false,"name":"Chuhan Shi"},{"affiliations":["ShanghaiTech University, Shanghai, China","ShanghaiTech University, Shanghai, China"],"email":"liquan@shanghaitech.edu.cn","is_corresponding":false,"name":"Quan Li"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yang Ouyang"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1089","time_end":"","time_stamp":"","time_start":"","title":"A Two-Phase Visualization System for Continuous Human-AI Collaboration in Sequelae Analysis and Modeling","uid":"v-short-1089","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Visualizing high dimensional data is challenging, since any dimensionality reduction technique will distort distances. A classic method in cartography\u2013Tissot\u2019s Indicatrix, specific to sphere-to-plane maps\u2013visualizes distortion using ellipses. Inspired by this idea, we describe the hypertrix: a method for representing distortions that occur when data is projected from arbitrarily high dimensions onto a 2D plane. 
We demonstrate our technique through synthetic and real-world datasets, and describe how this indicatrix can guide interpretations of nonlinear dimensionality reduction.","accessible_pdf":false,"authors":[{"affiliations":["Harvard University, Boston, United States"],"email":"sraval@g.harvard.edu","is_corresponding":true,"name":"Shivam Raval"},{"affiliations":["Harvard University, Cambridge, United States","Google Research, Cambridge, United States"],"email":"viegas@google.com","is_corresponding":false,"name":"Fernanda Viegas"},{"affiliations":["Harvard University, Cambridge, United States","Google Research, Cambridge, United States"],"email":"wattenberg@gmail.com","is_corresponding":false,"name":"Martin Wattenberg"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Shivam Raval"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1090","time_end":"","time_stamp":"","time_start":"","title":"Hypertrix: An indicatrix for high-dimensional visualizations","uid":"v-short-1090","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Coordinated multiple views (CMV) in a visual analytics system can help users explore multiple data representations simultaneously with linked interactions. However, the implementation of coordinated multiple views can be challenging. Without standard software libraries, visualization designers need to re-implement CMV during the development of each system. We introduce use-coordination, a grammar and software library that supports the efficient implementation of CMV. The grammar defines a JSON-based representation for an abstract coordination model from the information visualization literature. We contribute an optional extension to the model and grammar that allows for hierarchical coordination. Through three use cases, we show that use-coordination enables implementation of CMV in systems containing not only basic statistical charts but also more complex visualizations such as medical imaging volumes. 
We describe six software extensions, including a graphical editor for manipulation of coordination, which showcase the potential to build upon our coordination-focused declarative approach.","accessible_pdf":false,"authors":[{"affiliations":["Harvard Medical School, Boston, United States"],"email":"mark_keller@hms.harvard.edu","is_corresponding":true,"name":"Mark S Keller"},{"affiliations":["Harvard Medical School, Boston, United States"],"email":"trevor_manz@g.harvard.edu","is_corresponding":false,"name":"Trevor Manz"},{"affiliations":["Harvard Medical School, Boston, United States"],"email":"nils@hms.harvard.edu","is_corresponding":false,"name":"Nils Gehlenborg"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Mark S Keller"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1096","time_end":"","time_stamp":"","time_start":"","title":"Use-Coordination: Model, Grammar, and Library for Implementation of Coordinated Multiple Views","uid":"v-short-1096","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Visualization tools now commonly present automated insights highlighting salient data patterns, including correlations, distributions, outliers, and differences, among others. While these insights are valuable for data exploration and chart interpretation, users currently only have a binary choice of accepting or rejecting them, lacking the flexibility to refine the system logic or customize the insight generation process. To address this limitation, we present GROOT, a prototype system that allows users to proactively specify and refine automated data insights. The system allows users to directly manipulate chart elements to receive insight recommendations based on their selections. Additionally, GROOT provides users with a manual editing interface to customize, reconfigure, or add new insights to individual charts and propagate them to future explorations. 
We describe a usage scenario to illustrate how these features collectively support insight editing and configuration, and discuss opportunities for future work, including incorporating LLMs, improving semantic data and visualization search, and supporting insight management.","accessible_pdf":false,"authors":[{"affiliations":["University of Maryland, College Park, College Park, United States","Tableau Research, Seattle, United States"],"email":"sgathani@cs.umd.edu","is_corresponding":true,"name":"Sneha Gathani"},{"affiliations":["Tableau Research, Seattle, United States"],"email":"amcrisan@uwaterloo.ca","is_corresponding":false,"name":"Anamaria Crisan"},{"affiliations":["Tableau Research, Palo Alto, United States"],"email":"vsetlur@tableau.com","is_corresponding":false,"name":"Vidya Setlur"},{"affiliations":["Tableau Research, Seattle, United States"],"email":"arjun.srinivasan.10@gmail.com","is_corresponding":false,"name":"Arjun Srinivasan"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Sneha Gathani"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1097","time_end":"","time_stamp":"","time_start":"","title":"Groot: An Interface for Editing and Configuring Automated Data Insights","uid":"v-short-1097","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Confidence scores of automatic speech recognition (ASR) outputs are often inadequately communicated, preventing their seamless integration into analytical workflows. In this paper, we introduce ConFides, a visual analytic system developed in collaboration with intelligence analysts to address this issue. ConFides aims to aid exploration and post-AI-transcription editing by visually representing the confidence associated with the transcription. We demonstrate how our tool can assist intelligence analysts who use ASR outputs in their analytical and exploratory tasks and how it can help mitigate misinterpretation of crucial information. We also discuss opportunities for improving textual data cleaning and model transparency for human-machine collaboration.","accessible_pdf":false,"authors":[{"affiliations":["Washington University in St. Louis, St. Louis, United States"],"email":"sha@wustl.edu","is_corresponding":true,"name":"Sunwoo Ha"},{"affiliations":["Washington University in St. Louis, St. Louis, United States"],"email":"chaelim@wustl.edu","is_corresponding":false,"name":"Chaehun Lim"},{"affiliations":["Smith College, Northampton, United States"],"email":"jcrouser@smith.edu","is_corresponding":false,"name":"R. Jordan Crouser"},{"affiliations":["Washington University in St. 
Louis, United States"],"email":"alvitta@wustl.edu","is_corresponding":false,"name":"Alvitta Ottley"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Sunwoo Ha"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1100","time_end":"","time_stamp":"","time_start":"","title":"ConFides: A Visual Analytics Solution for Automated Speech Recognition Analysis and Exploration","uid":"v-short-1100","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Color coding, a technique assigning specific colors to different information types, has proven advantages in aiding human cognitive activities, especially reading and comprehension. The rise of Large Language Models (LLMs) has streamlined document coding, enabling simple automatic text labeling with various schemes. This has the potential to make color-coding more accessible and benefit more users. However, the importance of color choice, particularly in aiding textual information seeking through various color schemes, is not well studied. This paper presents a user study assessing the effectiveness of various color schemes generated by different base colors for readers' information-seeking performance in text documents color-coded by LLMs. Participants performed information-seeking tasks within scholarly papers' abstracts, each coded with a different scheme under time constraints. Results showed that non-analogous color schemes led to better information-seeking performance in both accuracy and response time. Yellow-inclusive color schemes led to shorter response times and were also preferred by most participants. These findings could inform better choices of color schemes for annotating text documents. As LLMs advance document coding, we advocate for more research focusing on the \"color\" aspect of color-coding techniques.","accessible_pdf":false,"authors":[{"affiliations":["Pennsylvania State University, University Park, United States"],"email":"samnghoyin@gmail.com","is_corresponding":true,"name":"Ho Yin Ng"},{"affiliations":["Pennsylvania State University, University Park, United States"],"email":"zmh5268@psu.edu","is_corresponding":false,"name":"Zeyu He"},{"affiliations":["Pennsylvania State University, University Park, United States"],"email":"txh710@psu.edu","is_corresponding":false,"name":"Ting-Hao Kenneth Huang"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Ho Yin Ng"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1101","time_end":"","time_stamp":"","time_start":"","title":"What Color Scheme is More Effective in Assisting Readers to Locate Information in a Color-Coded Article?","uid":"v-short-1101","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Homophily refers to the tendency of individuals to associate with others who are similar to them in characteristics such as race, ethnicity, age, gender, or interests. 
In this paper, we investigate if individuals exhibit racial homophily when viewing visualizations, using mass shooting data in the United States as the example topic. We conducted a crowdsourced experiment (N=450) where each participant was shown a visualization displaying the counts of mass shooting victims, highlighting the counts for one of three racial groups (White, Black, or Hispanic). Participants were assigned to view visualizations highlighting their own race or a different race to assess the influence of racial concordance on changes in affect (emotion) and attitude towards gun control. While we did not find evidence of homophily, the results showed a significant negative shift in affect across all visualization conditions. Notably, political ideology significantly impacted changes in affect, with more liberal views correlating with a more negative affect change. Our findings underscore the complexity of reactions to mass shooting visualizations and highlight the need for additional measures for understanding homophily in visualizations.","accessible_pdf":false,"authors":[{"affiliations":["New York University, Brooklyn, United States"],"email":"pt2393@nyu.edu","is_corresponding":true,"name":"Poorna Talkad Sukumar"},{"affiliations":["New York University, Brooklyn, United States"],"email":"mporfiri@nyu.edu","is_corresponding":false,"name":"Maurizio Porfiri"},{"affiliations":["New York University, New York, United States"],"email":"onov@nyu.edu","is_corresponding":false,"name":"Oded Nov"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Poorna Talkad Sukumar"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1109","time_end":"","time_stamp":"","time_start":"","title":"Connections Beyond Data: Exploring Homophily With Visualizations","uid":"v-short-1109","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"As visualization literacy and its implications gain prominence, we need effective methods to teach and prepare students for the variety of visualizations they might encounter in an increasingly data-driven world. Recently, the potential of comics has been recognized in various data visualization contexts, including educational settings. In this paper, we describe the development of a workshop in which we use our \u201ccomic construction kit\u201d as a tool for students to understand various data visualization techniques through an interactive creative approach of creating explanatory comics. We report on our insights and learnings from holding eight workshops with high school students, high school teachers, university students, and university lecturers, aiming to enhance the landscape of hands-on visualization activities that can enrich the visualization classroom. The comic construction kit and all supplemental materials are open source under a CC-BY license and available at https://fhstp.github.io/comixplain/vis4schools.html.","accessible_pdf":false,"authors":[{"affiliations":["St. P\u00f6lten University of Applied Sciences, St. P\u00f6lten, Austria"],"email":"magdalena.boucher@fhstp.ac.at","is_corresponding":true,"name":"Magdalena Boucher"},{"affiliations":["St. Poelten University of Applied Sciences, St. 
Poelten, Austria"],"email":"christina.stoiber@fhstp.ac.at","is_corresponding":false,"name":"Christina Stoiber"},{"affiliations":["School of Informatics, Communications and Media, Hagenberg im M\u00fchlkreis, Austria"],"email":"mandy.keck@fh-hagenberg.at","is_corresponding":false,"name":"Mandy Keck"},{"affiliations":["St. Poelten University of Applied Sciences, St. Poelten, Austria"],"email":"victor.oliveira@fhstp.ac.at","is_corresponding":false,"name":"Victor Adriel de Jesus Oliveira"},{"affiliations":["St. Poelten University of Applied Sciences, St. Poelten, Austria"],"email":"wolfgang.aigner@fhstp.ac.at","is_corresponding":false,"name":"Wolfgang Aigner"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Magdalena Boucher"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1114","time_end":"","time_stamp":"","time_start":"","title":"The Comic Construction Kit: An Activity for Students to Learn and Explain Data Visualizations","uid":"v-short-1114","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Visualizations support rapid analysis of scientific datasets, allowing viewers to glean aggregate information (e.g., the mean) within split-seconds. While prior research has explored this ability in conventional charts, it is unclear if spatial visualizations used by computational scientists afford a similar ensemble perception capacity. We investigate people's ability to estimate two summary statistics, mean and variance, from pseudocolor scalar fields. In a crowdsourced experiment, we find that participants can reliably characterize both statistics, although variance discrimination requires a much stronger signal. Multi-hue and diverging colormaps outperformed monochromatic, luminance ramps in aiding this extraction. Analysis of qualitative responses suggests that participants often estimate the distribution of hotspots and valleys as visual proxies for data statistics. These findings suggest that people's summary interpretation of spatial datasets is likely driven by the appearance of discrete color segments, rather than assessments of overall luminance. Implicit color segmentation in quantitative displays could thus prove more useful than previously assumed by facilitating quick, gist-level judgments about color-coded visualizations.","accessible_pdf":false,"authors":[{"affiliations":["Argonne National Laboratory, Lemont, United States"],"email":"vmateevitsi@anl.gov","is_corresponding":false,"name":"Victor A. Mateevitsi"},{"affiliations":["Argonne National Laboratory, Lemont, United States","University of Illinois Chicago, Chicago, United States"],"email":"papka@anl.gov","is_corresponding":false,"name":"Michael E. 
Papka"},{"affiliations":["Indiana University, Indianapolis, United States"],"email":"redak@iu.edu","is_corresponding":true,"name":"Khairi Reda"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Khairi Reda"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1116","time_end":"","time_stamp":"","time_start":"","title":"Science in a Blink: Supporting Ensemble Perception in Scalar Fields","uid":"v-short-1116","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Geovisualizations are powerful tools for exploratory spatial analysis, enabling sighted users to discern patterns, trends, and relationships within geographic data. However, these visual tools have remained largely inaccessible to screen-reader users. We present AltGeoViz, a new system we designed to facilitate geovisualization exploration for these users. AltGeoViz dynamically generates alt-text descriptions based on the user's current map view, providing summaries of spatial patterns and descriptive statistics. In a study of five screen-reader users, we found that AltGeoViz enabled them to interact with geovisualizations in previously infeasible ways. Participants demonstrated a clear understanding of data summaries and their location context, and they could synthesize spatial understandings of their explorations. Moreover, we identified key areas for improvement, such as the addition of intuitive spatial navigation controls and comparative analysis features.","accessible_pdf":false,"authors":[{"affiliations":["University of Washington, Seattle, United States"],"email":"chuchuli@cs.washington.edu","is_corresponding":true,"name":"Chu Li"},{"affiliations":["University of Washington, Seattle, United States"],"email":"ypang2@cs.washington.edu","is_corresponding":false,"name":"Rock Yuren Pang"},{"affiliations":["University of Washington, Seattle, United States"],"email":"asharif@cs.washington.edu","is_corresponding":false,"name":"Ather Sharif"},{"affiliations":["University of Washington, Seattle, United States"],"email":"chheda@cs.washington.edu","is_corresponding":false,"name":"Arnavi Chheda-Kothary"},{"affiliations":["University of Washington, Seattle, United States"],"email":"jheer@uw.edu","is_corresponding":false,"name":"Jeffrey Heer"},{"affiliations":["University of Washington, Seattle, United States"],"email":"jonf@cs.uw.edu","is_corresponding":false,"name":"Jon E. Froehlich"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Chu Li"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1117","time_end":"","time_stamp":"","time_start":"","title":"AltGeoViz: Facilitating Accessible Geovisualization","uid":"v-short-1117","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Analyzing uncertainty in spatial data is a vital task in many domains, as for example with climate and weather simulation ensembles. 
Although many methods support the analysis of uncertainty, such as uncertain isocontours or the calculation of statistical values, it remains challenging to get an overview of the uncertainty, decide on a further method or parameter to analyze the data, or investigate a region or point of interest further. We present cumulative height fields, a visualization method for 2D scalar field ensembles using the marginal empirical distribution function, and show preliminary results using volume rendering and slicing for the Max Planck Institute Grand Ensemble.","accessible_pdf":false,"authors":[{"affiliations":["Institute of Computer Science, Leipzig University, Leipzig, Germany"],"email":"daetz@informatik.uni-leipzig.de","is_corresponding":true,"name":"Tomas Rodolfo Daetz Chacon"},{"affiliations":["German Climate Computing Center (DKRZ), Hamburg, Germany"],"email":"boettinger@dkrz.de","is_corresponding":false,"name":"Michael B\u00f6ttinger"},{"affiliations":["Leipzig University, Leipzig, Germany"],"email":"scheuermann@informatik.uni-leipzig.de","is_corresponding":false,"name":"Gerik Scheuermann"},{"affiliations":["Leipzig University, Leipzig, Germany"],"email":"heine@informatik.uni-leipzig.de","is_corresponding":false,"name":"Christian Heine"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Tomas Rodolfo Daetz Chacon"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1119","time_end":"","time_stamp":"","time_start":"","title":"Visualization of 2D Scalar Field Ensembles Using Volume Visualization of the Empirical Distribution Function","uid":"v-short-1119","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Many real-world networks contain structurally-equivalent nodes. These are defined as vertices that share the same set of neighboring nodes, making them interchangeable with a traditional graph layout approach. However, many real-world graphs also have properties associated with nodes, adding additional meaning to them. We present an approach for swapping locations of structurally-equivalent nodes in graph layout so that those with more similar properties have closer proximity to each other. This improves the usefulness of the visualization from an attribute perspective without negatively impacting the visualization from a structural perspective. 
We include an algorithm for finding these sets of nodes in linear time, as well as methodologies for ordering nodes based on their attribute similarity, which works for scalar, ordinal, multidimensional, and categorical data.","accessible_pdf":false,"authors":[{"affiliations":["Pacific Northwest National Lab, Richland, United States"],"email":"patrick.mackey@pnnl.gov","is_corresponding":true,"name":"Patrick Mackey"},{"affiliations":["University of Arizona, Tucson, United States","Pacific Northwest National Laboratory, Richland, United States"],"email":"jacobmiller1@arizona.edu","is_corresponding":false,"name":"Jacob Miller"},{"affiliations":["Pacific Northwest National Laboratory, Richland, United States"],"email":"liz.f@pnnl.gov","is_corresponding":false,"name":"Liz Faultersack"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Patrick Mackey"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1121","time_end":"","time_stamp":"","time_start":"","title":"Improving Property Graph Layouts by Leveraging Attribute Similarity for Structurally Equivalent Nodes","uid":"v-short-1121","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Psychological research often involves understanding psychological constructs through conducting factor analysis on data collected by a questionnaire, which can comprise hundreds of questions. Without interactive systems for interpreting factor models, researchers are frequently exposed to subjectivity, potentially leading to misinterpretations or overlooked crucial information. This paper introduces FAVis, a novel interactive visualization tool designed to aid researchers in interpreting and evaluating factor analysis results. FAVis enhances the understanding of relationships between variables and factors by supporting multiple views for visualizing factor loadings and correlations, allowing users to analyze information from various perspectives. The primary feature of FAVis is to enable users to set optimal thresholds for factor loadings to balance clarity and information retention. FAVis also allows users to assign tags to variables, enhancing the understanding of factors by linking them to their associated psychological constructs. We conduct a case study on a dataset from the Motivational State Questionnaire, utilizing a three-factor common factor model. 
Our user study demonstrates the utility of FAVis in various tasks.","accessible_pdf":false,"authors":[{"affiliations":["University of Notre Dame, Notre Dame, United States","University of Notre Dame, Notre Dame, United States"],"email":"ylu22@nd.edu","is_corresponding":true,"name":"Yikai Lu"},{"affiliations":["University of Notre Dame, Notre Dame, United States"],"email":"chaoli.wang@nd.edu","is_corresponding":false,"name":"Chaoli Wang"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yikai Lu"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1126","time_end":"","time_stamp":"","time_start":"","title":"FAVis: Visual Analytics of Factor Analysis for Psychological Research","uid":"v-short-1126","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"In this paper, we analyze the Apple Vision Pro hardware and the visionOS software platform, assessing their capabilities for volume rendering of structured grids, a prevalent technique across various applications. The Apple Vision Pro supports multiple display modes, from classical augmented reality (AR) using video see-through technology to immersive virtual reality (VR) environments that exclusively render virtual objects. These modes utilize different APIs and exhibit distinct capabilities. Our focus is on direct volume rendering, selected for its implementation challenges due to the native graphics APIs being predominantly oriented towards surface shading. Volume rendering is particularly vital in fields where AR and VR visualizations offer substantial benefits, such as in medicine and manufacturing. Despite its initial high cost, we anticipate that the Vision Pro will become more accessible and affordable over time, following Apple's track record of market expansion. 
As these devices become more prevalent, understanding how to effectively program and utilize them becomes increasingly important, offering significant opportunities for innovation and practical applications in various sectors.","accessible_pdf":false,"authors":[{"affiliations":["University of Duisburg-Essen, Duisburg, Germany"],"email":"camilla.hrycak@uni-due.de","is_corresponding":true,"name":"Camilla Hrycak"},{"affiliations":["University of Duisburg-Essen, Duisburg, Germany"],"email":"david.lewakis@stud.uni-due.de","is_corresponding":false,"name":"David Lewakis"},{"affiliations":["University of Duisburg-Essen, Duisburg, Germany"],"email":"jens.krueger@uni-due.de","is_corresponding":false,"name":"Jens Harald Krueger"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Camilla Hrycak"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1127","time_end":"","time_stamp":"","time_start":"","title":"Investigating the Apple Vision Pro Spatial Computing Platform for GPU-Based Volume Visualization","uid":"v-short-1127","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Visualization, from simple line plots to complex high-dimensional visual analysis systems, has established itself throughout numerous domains to explore, analyze, and evaluate data. Applying such visualizations in the context of simulation science where High-Performance Computing (HPC) produces ever-growing amounts of data that is more complex, potentially multidimensional, and multi-modal, takes up resources and a high level of technological experience often not available to domain experts. In this work, we present DaVE - a curated database of visualization examples, which aims to provide state-of-the-art and advanced visualization methods that arise in the context of HPC applications. Based on domain- or data-specific descriptors entered by the user, DaVE provides a list of appropriate visualization techniques, each accompanied by descriptions, examples, references, and resources. Sample code, adaptable container templates, and recipes for easy integration in HPC applications can be downloaded for easy access to high-fidelity visualizations. 
While the database is currently filled with a limited number of entries based on a broad evaluation of needs and challenges of current HPC users, DaVE is designed to be easily extended by experts from both the visualization and HPC communities.","accessible_pdf":false,"authors":[{"affiliations":["RWTH Aachen University, Aachen, Germany"],"email":"koenen@informatik.rwth-aachen.de","is_corresponding":true,"name":"Jens Koenen"},{"affiliations":["RPTU Kaiserslautern-Landau, Kaiserslautern, Germany"],"email":"m.petersen@rptu.de","is_corresponding":false,"name":"Marvin Petersen"},{"affiliations":["RPTU Kaiserslautern-Landau, Kaiserslautern, Germany"],"email":"garth@rptu.de","is_corresponding":false,"name":"Christoph Garth"},{"affiliations":["RWTH Aachen University, Aachen, Germany"],"email":"gerrits@vis.rwth-aachen.de","is_corresponding":false,"name":"Tim Gerrits"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Jens Koenen"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1130","time_end":"","time_stamp":"","time_start":"","title":"DaVE - A Curated Database of Visualization Examples","uid":"v-short-1130","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Humans struggle to perceive and interpret high-dimensional data. Therefore, high-dimensional data are often projected into two dimensions for visualization. Many applications benefit from complex nonlinear dimensionality reduction techniques, but the effects of individual high-dimensional features are hard to explain in the two-dimensional space. Most visualization solutions use multiple two-dimensional plots, each showing the effect of one high-dimensional feature in two dimensions; this approach creates a need for a visual inspection of k plots for a k-dimensional input space. Our solution, Feature Clock, provides a novel approach that eliminates the need to inspect these k plots to grasp the influence of original features on the data structure depicted in two dimensions. 
Feature Clock enhances the explainability and compactness of visualizations of embedded data and is available in an open-source Python library.","accessible_pdf":false,"authors":[{"affiliations":["ETH Z\u00fcrich, Z\u00fcrich, Switzerland"],"email":"ovcharenko.folga@gmail.com","is_corresponding":true,"name":"Olga Ovcharenko"},{"affiliations":["ETH Z\u00fcrich, Z\u00fcrich, Switzerland"],"email":"rita.sevastjanova@uni-konstanz.de","is_corresponding":false,"name":"Rita Sevastjanova"},{"affiliations":["ETH Zurich, Z\u00fcrich, Switzerland"],"email":"valentina.boeva@inf.ethz.ch","is_corresponding":false,"name":"Valentina Boeva"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Olga Ovcharenko"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1135","time_end":"","time_stamp":"","time_start":"","title":"Feature Clock: High-Dimensional Effects in Two-Dimensional Plots","uid":"v-short-1135","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Reconstruction of 3D scenes from 2D images is a technical challenge that impacts domains from Earth and planetary sciences and space exploration to augmented and virtual reality. Typically, reconstruction algorithms first identify common features across images and then minimize reconstruction errors after estimating the shape of the terrain. This bundle adjustment (BA) step optimizes around a single, simplifying scalar value that obfuscates many possible causes of reconstruction errors (e.g., initial estimate of the position and orientation of the camera, lighting conditions, ease of feature detection in the terrain). Reconstruction errors can lead to inaccurate scientific inferences or endanger a spacecraft exploring a remote environment. To address this challenge, we present VECTOR, a visual analysis tool that improves error inspection for stereo reconstruction BA. VECTOR provides analysts with previously unavailable visibility into feature locations, camera pose, and computed 3D points. VECTOR was developed in partnership with the Perseverance Mars Rover and Ingenuity Mars Helicopter terrain reconstruction team at the NASA Jet Propulsion Laboratory. 
We report on how this tool was used to debug and improve terrain reconstruction for the Mars 2020 mission.","accessible_pdf":false,"authors":[{"affiliations":["Northeastern University, Boston, United States"],"email":"racquel.fygenson@gmail.com","is_corresponding":false,"name":"Racquel Fygenson"},{"affiliations":["Weta FX, Auckland, New Zealand"],"email":"kjawad@andrew.cmu.edu","is_corresponding":false,"name":"Kazi Jawad"},{"affiliations":["Art Center, Pasadena, United States"],"email":"zongzhanisabelli@gmail.com","is_corresponding":false,"name":"Zongzhan Li"},{"affiliations":["California Institute of Technology, Pasadena, United States"],"email":"francois.ayoub@jpl.nasa.gov","is_corresponding":false,"name":"Francois Ayoub"},{"affiliations":["California Institute of Technology, Pasadena, United States"],"email":"bob.deen@jpl.nasa.gov","is_corresponding":false,"name":"Robert G Deen"},{"affiliations":["California Institute of Technology, Pasadena, United States"],"email":"sd@scottdavidoff.com","is_corresponding":false,"name":"Scott Davidoff"},{"affiliations":["Carnegie Mellon University, Pittsburgh, United States"],"email":"domoritz@cmu.edu","is_corresponding":false,"name":"Dominik Moritz"},{"affiliations":["NASA-JPL, Pasadena, United States"],"email":"mauricio.a.hess.flores@jpl.nasa.gov","is_corresponding":true,"name":"Mauricio Hess-Flores"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Mauricio Hess-Flores"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1144","time_end":"","time_stamp":"","time_start":"","title":"Opening the black box of 3D reconstruction error analysis with VECTOR","uid":"v-short-1144","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Millions of runners rely on smart watches that display running-related metrics such as pace, heart rate and distance for training and racing -- mostly with text and numbers. Although research tells us that visualizations are a good alternative to text on smart watches, we know little about how visualizations can help in realistic running scenarios. We conducted a study in which 20 runners completed running-related tasks on an outdoor track using both text and visualizations. 
Our results show that runners are 1.5 to 8 times faster in completing those tasks with visualizations than with text, prefer visualizations to text, and would use such visualizations while running -- were they available on their smart watch.","accessible_pdf":false,"authors":[{"affiliations":["University of Victoria, Victoria, Canada"],"email":"sarinaksj@uvic.ca","is_corresponding":false,"name":"Sarina Kashanj"},{"affiliations":["University of Victoria, Victoria, Canada","Delft University of Technology, Delft, Netherlands"],"email":"xiyao.wang23@gmail.com","is_corresponding":false,"name":"Xiyao Wang"},{"affiliations":["University of Victoria, Victoria, Canada"],"email":"cperin@uvic.ca","is_corresponding":true,"name":"Charles Perin"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Charles Perin"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1146","time_end":"","time_stamp":"","time_start":"","title":"Visualizations on Smart Watches while Running: It Actually Helps!","uid":"v-short-1146","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Exploratory visual data analysis tools empower data analysts to efficiently and intuitively explore data insights throughout the entire analysis cycle. However, the gap between common programmatic analysis (e.g., within computational notebooks) and exploratory visual analysis leads to a disjointed and inefficient data analysis experience. To bridge this gap, we developed PyGWalker, a Python library that offers on-the-fly assistance for exploratory visual data analysis. It features a lightweight and intuitive GUI with a shelf builder modality. Its loosely coupled architecture supports multiple computational environments to accommodate varying data sizes. Since its release in February 2023, PyGWalker has gained much attention, with 468k downloads on PyPI and over 9.8k stars on GitHub as of April 2024.
This demonstrates its value to the data science and visualization community, with researchers and developers integrating it into their own applications and studies.","accessible_pdf":false,"authors":[{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China","Kanaries Data Inc., Hangzhou, China"],"email":"yue.yu@connect.ust.hk","is_corresponding":true,"name":"Yue Yu"},{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"lshenaj@connect.ust.hk","is_corresponding":false,"name":"Leixian Shen"},{"affiliations":["Kanaries Data Inc., Hangzhou, China"],"email":"feilong@kanaries.net","is_corresponding":false,"name":"Fei Long"},{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"huamin@cse.ust.hk","is_corresponding":false,"name":"Huamin Qu"},{"affiliations":["Kanaries Data Inc., Hangzhou, China"],"email":"haochen@kanaries.net","is_corresponding":false,"name":"Hao Chen"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yue Yu"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1150","time_end":"","time_stamp":"","time_start":"","title":"PyGWalker: On-the-fly Assistant for Exploratory Visual Data Analysis","uid":"v-short-1150","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Augmented reality (AR) area labels can highlight real-life objects, visualize real world regions with arbitrary boundaries, and show invisible objects or features. Environment conditions such as lighting and clutter can decrease fixed or passive label visibility, and labels that have high opacity levels can occlude crucial details in the environment. We design and evaluate active AR area label visualization modes to enhance visibility across real-life environments, while still retaining environment details within the label. For this, we define a distant characteristic color from the environment in perceptual CIELAB space, then introduce spatial variations among label pixel colors based on the underlying environment variation. 
In a user study with 18 participants, we discovered that our active label visualization modes can be comparable in visibility to a fixed green baseline by Gabbard et al., and can outperform it with added spatial variation in cluttered environments, across varying levels of lighting (e.g., nighttime), and in environments with colors similar to the fixed baseline color.","accessible_pdf":false,"authors":[{"affiliations":["Brown University, Providence, United States"],"email":"hojung_kwon@brown.edu","is_corresponding":false,"name":"Hojung Kwon"},{"affiliations":["Brown University, Providence, United States"],"email":"yuanbo_li@brown.edu","is_corresponding":false,"name":"Yuanbo Li"},{"affiliations":["Brown University, Providence, United States"],"email":"chloe_ye2019@hotmail.com","is_corresponding":false,"name":"Xiaohan Ye"},{"affiliations":["Brown University, Providence, United States"],"email":"praccho_muna-mcquay@brown.edu","is_corresponding":false,"name":"Praccho Muna-McQuay"},{"affiliations":["Duke University, Durham, United States"],"email":"liuren.yin@duke.edu","is_corresponding":false,"name":"Liuren Yin"},{"affiliations":["Brown University, Providence, United States"],"email":"james_tompkin@brown.edu","is_corresponding":true,"name":"James Tompkin"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["James Tompkin"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1155","time_end":"","time_stamp":"","time_start":"","title":"Active Appearance and Spatial Variation Can Improve Visibility in Area Labels for Augmented Reality","uid":"v-short-1155","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Compound graphs are networks in which vertices can be grouped into larger subsets, with these subsets capable of further grouping, resulting in a nesting that can be many levels deep. Such graphs arise in several applications including biological workflows, chemical equations, and computational data flow analysis. Common layouts prioritize the lowest level of the grouping, down to the individual ungrouped vertices, which can make the higher level grouped structures more difficult to discern, especially in deeply nested networks. We contribute an overview+detail layout that preserves the saliency of the higher level network structure when groups are expanded to show internal nested structure. Our layout draws inner structures adjacent to their parents, using a modified tree layout to place substructures. We describe our algorithm and then present case studies demonstrating the layout's utility to a domain expert working on data flow analysis. 
Finally, we discuss network parameters and analysis situations in which our layout is well suited.","accessible_pdf":false,"authors":[{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"hatch.on27@gmail.com","is_corresponding":true,"name":"Chang Han"},{"affiliations":["University of Arizona, Tucson, United States"],"email":"lieffers@arizona.edu","is_corresponding":false,"name":"Justin Lieffers"},{"affiliations":["University of Arizona, Tucson, United States"],"email":"claytonm@arizona.edu","is_corresponding":false,"name":"Clayton Morrison"},{"affiliations":["The University of Utah, Salt Lake City, United States"],"email":"kisaacs@sci.utah.edu","is_corresponding":false,"name":"Katherine E. Isaacs"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Chang Han"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1156","time_end":"","time_stamp":"","time_start":"","title":"An Overview + Detail Layout for Visualizing Compound Graphs","uid":"v-short-1156","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"With two studies, we assess how different walking trajectories (straight line, circular, and infinity) and speeds (2 km/h, 4 km/h, and 6 km/h) influence the accuracy and response time of participants reading micro visualizations on a smartwatch. We showed our participants common watch face micro visualizations including date, time, weather information, and four complications showing progress charts of fitness data. Our findings suggest that while walking trajectories did not significantly affect reading performance, overall walking activity, especially at high speeds, hurt reading accuracy and, to some extent, response time.","accessible_pdf":false,"authors":[{"affiliations":["University of Stuttgart, Stuttgart, Germany"],"email":"fairouz.grioui@vis.uni-stuttgart.de","is_corresponding":true,"name":"Fairouz Grioui"},{"affiliations":["University of Stuttgart, Stuttgart, Germany"],"email":"research@blascheck.eu","is_corresponding":false,"name":"Tanja Blascheck"},{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Orsay, France","Inria, Saclay, France"],"email":"yaolijie0219@gmail.com","is_corresponding":false,"name":"Lijie Yao"},{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Orsay, France","Inria, Saclay, France"],"email":"petra.isenberg@inria.fr","is_corresponding":false,"name":"Petra Isenberg"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Fairouz Grioui"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1159","time_end":"","time_stamp":"","time_start":"","title":"Micro Visualizations on a Smartwatch: Assessing Reading Performance While Walking","uid":"v-short-1159","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Digital twins are an excellent tool to model, visualize, and simulate complex systems, to understand and optimize their operation. 
In this work, we present the technical challenges of real-time visualization of a digital twin of the Frontier supercomputer. We show the initial prototype and current state of the twin and highlight technical design challenges of visualizing such a large High Performance Computing (HPC) system. The goal is to understand the use of augmented reality as a primary way to extract information and collaborate on digital twins of complex systems. This leverages the spatio-temporal aspect of a 3D representation of a digital twin, with the ability to view historical and real-time telemetry, trigger simulations of a system state, and view the results, which can be augmented via dashboards for details. Finally, we discuss considerations and opportunities for augmented reality of digital twins of large-scale, parallel computers.","accessible_pdf":false,"authors":[{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"maiterthm@ornl.gov","is_corresponding":true,"name":"Matthias Maiterth"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"brewerwh@ornl.gov","is_corresponding":false,"name":"Wes Brewer"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"dewetd@ornl.gov","is_corresponding":false,"name":"Dane De Wet"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"greenwoodms@ornl.gov","is_corresponding":false,"name":"Scott Greenwood"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"kumarv@ornl.gov","is_corresponding":false,"name":"Vineet Kumar"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"hinesjr@ornl.gov","is_corresponding":false,"name":"Jesse Hines"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"bouknightsl@ornl.gov","is_corresponding":false,"name":"Sedrick L Bouknight"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"wangz@ornl.gov","is_corresponding":false,"name":"Zhe Wang"},{"affiliations":["Hewlett Packard Enterprise, Berkshire, United Kingdom"],"email":"tim.dykes@hpe.com","is_corresponding":false,"name":"Tim Dykes"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"fwang2@ornl.gov","is_corresponding":false,"name":"Feiyi Wang"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Matthias Maiterth"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1161","time_end":"","time_stamp":"","time_start":"","title":"Visualizing an Exascale Data Center Digital Twin: Considerations, Challenges and Opportunities","uid":"v-short-1161","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Integral curves have been widely used to represent and analyze various vector fields. Curve-based clustering and pattern search approaches are usually applied to aid the identification of meaningful patterns from large numbers of integral curves. However, they typically do not support an interactive, level-of-detail exploration of these patterns. To address this, we propose a Curve Segment Neighborhood Graph (CSNG) to capture the relationships between neighboring curve segments.
This graph representation enables us to adapt the fast community detection algorithm, i.e., the Louvain algorithm, to identify individual graph communities from CSNG. Our results show that these communities often correspond to the features of the flow. To achieve a multi-level interactive exploration of the detected communities, we adapt a force-directed layout that allows users to refine and re-group communities based on their domain knowledge. We incorporate the proposed techniques into an interactive system to enable effective analysis and interpretation of complex patterns in large-scale integral curve datasets.","accessible_pdf":false,"authors":[{"affiliations":["University of Houston, Houston, United States"],"email":"nguyenpkk95@gmail.com","is_corresponding":true,"name":"Nguyen K Phan"},{"affiliations":["University of Houston, Houston, United States"],"email":"chengu@cs.uh.edu","is_corresponding":false,"name":"Guoning Chen"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Nguyen K Phan"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1163","time_end":"","time_stamp":"","time_start":"","title":"Curve Segment Neighborhood-based Vector Field Exploration","uid":"v-short-1163","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Custom animated visualizations of large, complex datasets are helpful across many domains, but they are hard to develop. Much of the difficulty arises from maintaining visualization state across a large set of animated graphical elements that may change in number over time. We contribute Counterpoint, a framework for state management designed to help implement such visualizations in JavaScript. Using Counterpoint, developers can manipulate large collections of marks with reactive attributes that are easy to render in scalable APIs such as Canvas and WebGL. Counterpoint also helps orchestrate the entry and exit of graphical elements using the concept of a rendering \"stage.\" Through a performance evaluation, we show that Counterpoint adds minimal overhead over current high-performance rendering techniques while simplifying implementation. We also provide two examples of visualizations created using Counterpoint that illustrate its flexibility and compatibility with other visualization toolkits as well as considerations for users with disabilities. 
Counterpoint is open-source and available at https://github.com/cmudig/counterpoint.","accessible_pdf":false,"authors":[{"affiliations":["Carnegie Mellon University, Pittsburgh, United States"],"email":"vsivaram@andrew.cmu.edu","is_corresponding":true,"name":"Venkatesh Sivaraman"},{"affiliations":["Carnegie Mellon University, Pittsburgh, United States"],"email":"fje@cmu.edu","is_corresponding":false,"name":"Frank Elavsky"},{"affiliations":["Carnegie Mellon University, Pittsburgh, United States"],"email":"domoritz@cmu.edu","is_corresponding":false,"name":"Dominik Moritz"},{"affiliations":["Carnegie Mellon University, Pittsburgh, United States"],"email":"adamperer@cmu.edu","is_corresponding":false,"name":"Adam Perer"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Venkatesh Sivaraman"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1166","time_end":"","time_stamp":"","time_start":"","title":"Counterpoint: Orchestrating Large-Scale Custom Animated Visualizations","uid":"v-short-1166","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Visualizing citation relations with network structures is widely used, but the visual complexity can make it challenging for individual researchers to navigate through them. We collected data from 18 researchers using an interface that we designed using network simplification methods and analyzed how users browsed and identified important papers. Our analysis reveals six major patterns used for identifying papers of interest, which can be categorized into three key components: Fields, Bridges, and Foundations, each viewed from two distinct perspectives: layout-oriented and connection-oriented. The connection-oriented approach was found to be more effective for selecting relevant papers, but the layout-oriented method was adopted more often, even though it led to unexpected results and user frustration. Our findings emphasize the importance of integrating these components and the necessity to balance visual layouts with meaningful connections to enhance the effectiveness of citation networks in academic browsing systems.","accessible_pdf":false,"authors":[{"affiliations":["Seoul National University, Seoul, Korea, Republic of"],"email":"krchoe@hcil.snu.ac.kr","is_corresponding":true,"name":"Kiroong Choe"},{"affiliations":["Seoul National University, Seoul, Korea, Republic of"],"email":"gracekim027@snu.ac.kr","is_corresponding":false,"name":"Eunhye Kim"},{"affiliations":["Dept. 
of Electrical and Computer Engineering, SNU, Seoul, Korea, Republic of"],"email":"paulmoguri@snu.ac.kr","is_corresponding":false,"name":"Sangwon Park"},{"affiliations":["Seoul National University, Seoul, Korea, Republic of"],"email":"jseo@snu.ac.kr","is_corresponding":false,"name":"Jinwook Seo"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Kiroong Choe"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1173","time_end":"","time_stamp":"","time_start":"","title":"Fields, Bridges, and Foundations: How Researchers Browse Citation Network Visualizations","uid":"v-short-1173","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"The proliferation of misleading visualizations online, particularly during critical events like public health crises and elections, poses a significant risk of misinformation. This work investigates the capability of GPT-4V to detect misleading visualizations. Utilizing a dataset of tweet-visualization pairs with various visual misleaders, we tested GPT-4V under four experimental conditions: naive zero-shot, naive few-shot, guided zero-shot, and guided few-shot. Our results demonstrate that GPT-4V can detect misleading visualizations with moderate accuracy without prior training (naive zero-shot) and that performance considerably improves by providing the model with the definitions of misleaders (guided zero-shot). However, combining definitions with examples of misleaders (guided few-shot) did not yield further improvements. This study underscores the feasibility of using large vision-language models such as GPT-4V to combat misinformation and emphasizes the importance of optimizing prompt engineering to enhance detection accuracy.","accessible_pdf":false,"authors":[{"affiliations":["University of Massachusetts Amherst, Amherst, United States"],"email":"jhalexander@umass.edu","is_corresponding":false,"name":"Jason Huang Alexander"},{"affiliations":["University of Massachusetts Amherst, Amherst, United States"],"email":"phnanda@umass.edu","is_corresponding":false,"name":"Priyal H Nanda"},{"affiliations":["Northeastern University, Boston, United States"],"email":"yangkc@iu.edu","is_corresponding":false,"name":"Kai-Cheng Yang"},{"affiliations":["University of Massachusetts Amherst, Amherst, United States"],"email":"asarv@cs.umass.edu","is_corresponding":true,"name":"Ali Sarvghad"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Ali Sarvghad"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1177","time_end":"","time_stamp":"","time_start":"","title":"Can GPT-4V Detect Misleading Visualizations?","uid":"v-short-1177","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"An atmospheric front is an imaginary surface that separates two distinct air masses and is commonly defined as the warm-air side of a frontal zone with high gradients of atmospheric temperature and humidity.
These fronts are a widely used conceptual model in meteorology and are often encountered in the literature as two-dimensional (2D) front lines on surface analysis charts. This paper presents a method for computing three-dimensional (3D) atmospheric fronts as surfaces that is capable of extracting continuous and well-confined features suitable for 3D visual analysis, spatio-temporal tracking, and statistical analyses. Recently developed contour-based methods for 3D front extraction rely on computing the third derivative of a moist potential temperature field. Additionally, they require the field to be smoothed to obtain continuous large-scale structures. This paper demonstrates the feasibility of an alternative method for front extraction using ridge surface computation. The proposed method requires only the second derivative of the input field and produces accurate structures even from unsmoothed data. An application of the ridge-based method to a data set corresponding to Cyclone Friederike demonstrates its benefits and utility towards visual analysis of the full 3D structure of fronts.","accessible_pdf":false,"authors":[{"affiliations":["Zuse Institute Berlin, Berlin, Germany"],"email":"anne.gossing@fu-berlin.de","is_corresponding":true,"name":"Anne Gossing"},{"affiliations":["Universit\u00e4t Hamburg, Hamburg, Germany"],"email":"andreas.beckert@uni-hamburg.de","is_corresponding":false,"name":"Andreas Beckert"},{"affiliations":["Universit\u00e4t Hamburg, Hamburg, Germany"],"email":"christoph.fischer-1@uni-hamburg.de","is_corresponding":false,"name":"Christoph Fischer"},{"affiliations":["Zuse Institute Berlin, Berlin, Germany"],"email":"klenert@zib.de","is_corresponding":false,"name":"Nicolas Klenert"},{"affiliations":["Indian Institute of Science, Bangalore, India"],"email":"vijayn@iisc.ac.in","is_corresponding":false,"name":"Vijay Natarajan"},{"affiliations":["Freie Universit\u00e4t Berlin, Berlin, Germany"],"email":"george.pacey@fu-berlin.de","is_corresponding":false,"name":"George Pacey"},{"affiliations":["Universit\u00e4t Hamburg, Hamburg, Germany"],"email":"thorwin.vogt@uni-hamburg.de","is_corresponding":false,"name":"Thorwin Vogt"},{"affiliations":["Universit\u00e4t Hamburg, Hamburg, Germany"],"email":"marc.rautenhaus@uni-hamburg.de","is_corresponding":false,"name":"Marc Rautenhaus"},{"affiliations":["Zuse Institute Berlin, Berlin, Germany"],"email":"baum@zib.de","is_corresponding":false,"name":"Daniel Baum"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Anne Gossing"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1183","time_end":"","time_stamp":"","time_start":"","title":"A Ridge-based Approach for Extraction and Visualization of 3D Atmospheric Fronts","uid":"v-short-1183","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"To improve the perception of hierarchical structures in data sets, several color map generation algorithms have been proposed to take this structure into account. But the design of hierarchical color maps elicits different requirements to those of color maps for tabular data. Within this paper, we make an initial effort to put design rules from the color map literature into the context of hierarchical color maps.
We investigate the impact of several design decisions and provide recommendations for various analysis scenarios. Thus, we lay the foundation for objective quality criteria to evaluate hierarchical color maps.","accessible_pdf":false,"authors":[{"affiliations":["Fraunhofer IGD, Darmstadt, Germany"],"email":"tobias.mertz@igd.fraunhofer.de","is_corresponding":true,"name":"Tobias Mertz"},{"affiliations":["Fraunhofer IGD, Darmstadt, Germany","TU Darmstadt, Darmstadt, Germany"],"email":"joern.kohlhammer@igd.fraunhofer.de","is_corresponding":false,"name":"J\u00f6rn Kohlhammer"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Tobias Mertz"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1184","time_end":"","time_stamp":"","time_start":"","title":"Towards a Quality Approach to Hierarchical Color Maps","uid":"v-short-1184","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"The visualization and interactive exploration of geo-referenced networks pose challenges if the network's nodes are not evenly distributed. Our approach proposes new ways of realizing animated transitions for exploring such networks from an ego-perspective. We aim to reduce the required screen real estate while maintaining the viewers' mental map of distances and directions. A preliminary study provides first insights into the comprehensibility of animated geographic transitions regarding directional relationships between start and end point in different projections. Two use cases showcase how ego-perspective graph exploration can be supported using less screen space than previous approaches.","accessible_pdf":false,"authors":[{"affiliations":["University of Stuttgart, Stuttgart, Germany"],"email":"max@mumintroll.org","is_corresponding":true,"name":"Max Franke"},{"affiliations":["University of Stuttgart, Stuttgart, Germany"],"email":"samuel.beck@vis.uni-stuttgart.de","is_corresponding":false,"name":"Samuel Beck"},{"affiliations":["University of Stuttgart, Stuttgart, Germany"],"email":"steffen.koch@vis.uni-stuttgart.de","is_corresponding":false,"name":"Steffen Koch"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Max Franke"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1185","time_end":"","time_stamp":"","time_start":"","title":"Two-point Equidistant Projection and Degree-of-interest Filtering for Smooth Exploration of Geo-referenced Networks","uid":"v-short-1185","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Data visualizations help extract insights from datasets, but reaching these insights requires decomposing high-level goals into low-level analytic tasks that can be complex due to varying degrees of data literacy and visualization experience. Recent advancements in large language models (LLMs) have shown promise for lowering barriers for users to achieve tasks such as writing code and may likewise facilitate visualization insight.
Scalable Vector Graphics (SVG), a text-based image format common in data visualizations, matches well with the text sequence processing of transformer-based LLMs. In this paper, we explore the capability of LLMs to perform 10 low-level visual analytic tasks defined by Amar, Eagan, and Stasko directly on SVG-based visualizations. Using zero-shot prompts, we instruct the models to provide responses or modify the SVG code based on given visualizations. Our findings demonstrate that LLMs can effectively modify existing SVG visualizations for some tasks like Cluster but perform poorly on tasks requiring mathematical operations like Compute Derived Value. We also discovered that LLM performance can vary based on factors such as the number of data points, the presence of value labels, and the chart type. Our findings contribute to gauging the general capabilities of LLMs and highlight the need for further exploration and development to fully harness their potential in supporting visual analytic tasks.","accessible_pdf":false,"authors":[{"affiliations":["Brown University, Providence, United States"],"email":"leooooxzz@gmail.com","is_corresponding":true,"name":"Zhongzheng Xu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Zhongzheng Xu"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1186","time_end":"","time_stamp":"","time_start":"","title":"Exploring the Capability of LLMs in Performing Low-Level Visual Analytic Tasks on SVG Data Visualizations","uid":"v-short-1186","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Vortices and their analysis play a critical role in the understanding of complex phenomena in turbulent flow. Traditional vortex extraction methods, notably region-based techniques, often overlook the entanglement phenomenon, resulting in the inclusion of multiple vortices within a single extracted region. Their separation is necessary for quantifying different types of vortices and their statistics. In this study, we propose a novel vortex separation method that extends the conventional contour tree-based segmentation approach with an additional step termed \u201clayering\u201d. Upon extracting a vortical region using specified vortex criteria (e.g., \u03bb2), we initially establish topological segmentation based on the contour tree, followed by the layering process to allocate appropriate segmentation IDs to unsegmented cells, thus separating individual vortices within the region. However, these regions may still suffer from inaccurate splits, which we address statistically by leveraging the continuity of vorticity lines across the split boundaries. 
Our findings demonstrate a significant improvement in both the separation of vortices and the mitigation of inaccurate splits compared to prior methods.","accessible_pdf":false,"authors":[{"affiliations":["University of Houston, Houston, United States"],"email":"adeelz92@gmail.com","is_corresponding":true,"name":"Adeel Zafar"},{"affiliations":["University of Houston, Houston, United States"],"email":"zpoorsha@cougarnet.uh.edu","is_corresponding":false,"name":"Zahra Poorshayegh"},{"affiliations":["University of Houston, Houston, United States"],"email":"diyang@uh.edu","is_corresponding":false,"name":"Di Yang"},{"affiliations":["University of Houston, Houston, United States"],"email":"chengu@cs.uh.edu","is_corresponding":false,"name":"Guoning Chen"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Adeel Zafar"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1188","time_end":"","time_stamp":"","time_start":"","title":"Topological Separation of Vortices","uid":"v-short-1188","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"The information visualization research community commonly produces supporting software to demonstrate technical contributions to the field. However, developing this software tends to be an overwhelming task, and the final product is often a research prototype without much thought for modularization and re-usability, which makes it harder to replicate and adopt. This paper presents a design pattern for facilitating the creation, dissemination, and re-utilization of visualization techniques using reactive widgets. The design pattern features basic concepts that leverage modern front-end development best practices and standards, which ease development and replication. The paper presents several usage examples of the pattern, templates for implementation, and even a wrapper for facilitating the conversion of any Vega specification into a reactive widget.","accessible_pdf":false,"authors":[{"affiliations":["Northeastern University, San Francisco, United States"],"email":"john.guerra@gmail.com","is_corresponding":true,"name":"John Alexis Guerra-Gomez"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["John Alexis Guerra-Gomez"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1189","time_end":"","time_stamp":"","time_start":"","title":"Towards Reusable and Reactive Widgets for Information Visualization Research and Dissemination","uid":"v-short-1189","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"To enable data-driven decision-making across organizations, data professionals need to share insights with their colleagues in context-appropriate communication channels.
Many of their colleagues rely on data but are not themselves analysts; furthermore, their colleagues are reluctant or unable to use dedicated analytical applications or dashboards, and they expect communication to take place within threaded collaboration platforms such as Slack or Microsoft Teams. In this paper, we introduce a set of six strategies for adapting content from business intelligence (BI) dashboards into appropriate formats for sharing on collaboration platforms, formats that we refer to as dashboard snapshots. Informed by prior studies of enterprise communication around data, these strategies go beyond redesigning or restyling by considering varying levels of data literacy across an organization, introducing affordances for self-service question-answering, and anticipating the post-sharing lifecycle of data artifacts. These strategies involve the use of templates that are matched to common communicative intents, serving to reduce the workload of data professionals. We contribute a formal representation of these strategies and demonstrate their applicability in a comprehensive enterprise communication scenario featuring multiple stakeholders that unfolds over the span of months.","accessible_pdf":false,"authors":[{"affiliations":["Northwestern University, Evanston, United States"],"email":"hyeokkim2024@u.northwestern.edu","is_corresponding":true,"name":"Hyeok Kim"},{"affiliations":["Tableau Research, Seattle, United States"],"email":"arjun.srinivasan.10@gmail.com","is_corresponding":false,"name":"Arjun Srinivasan"},{"affiliations":["Tableau Research, Seattle, United States"],"email":"mbrehmer@uwaterloo.ca","is_corresponding":false,"name":"Matthew Brehmer"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Hyeok Kim"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1191","time_end":"","time_stamp":"","time_start":"","title":"Bringing Data into the Conversation: Adapting Content from Business Intelligence Dashboards for Threaded Collaboration Platforms","uid":"v-short-1191","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Narrative visualization has become a crucial tool in data presentation, merging storytelling with data visualization to convey complex information in an engaging and accessible manner. In this study, we review the design space for narrative visualizations, focusing on animation style, through a comprehensive analysis of 71 papers from key visualization venues. We categorize these papers into six broad themes: Animation Style, Interactivity, Technology Usage, Methodology Development, Evaluation Type, and Application Domain. Our findings reveal a significant evolution in the field, marked by a growing preference for animated and non-interactive techniques. This trend reflects a shift towards minimizing user interaction while enhancing the clarity and impact of data presentation. We also identified key trends and technologies that have shaped the field, highlighting the role of technologies, such as machine learning in driving these changes. We offer insights into the dynamic interrelations within the narrative visualization domain, suggesting a future research trajectory that balances interactivity with automated tools to foster increased engagement. 
Our work lays the groundwork for future approaches to effective and innovative narrative visualization in diverse applications.","accessible_pdf":false,"authors":[{"affiliations":["Louisiana State University, Baton Rouge, United States"],"email":"jyang44@lsu.edu","is_corresponding":true,"name":"Vyri Junhan Yang"},{"affiliations":["Louisiana State University, Baton Rouge, United States"],"email":"mjasim@lsu.edu","is_corresponding":false,"name":"Mahmood Jasim"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Vyri Junhan Yang"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1192","time_end":"","time_stamp":"","time_start":"","title":"Animating the Narrative: A Review of Animation Styles in Narrative Visualization","uid":"v-short-1192","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"We present LinkQ, a system that leverages a large language model (LLM) to facilitate knowledge graph (KG) query construction through natural language question-answering. Traditional approaches often require detailed knowledge of complex graph querying languages, limiting the ability for users -- even experts -- to acquire valuable insights from KG data. LinkQ simplifies this process by first interpreting a user's question, then converting it into a well-formed KG query. By using the LLM to construct a query instead of directly answering the user's question, LinkQ guards against the LLM hallucinating or generating false, erroneous information. By integrating an LLM into LinkQ, users are able to conduct both exploratory and confirmatory data analysis, with the LLM helping to iteratively refine open-ended questions into precise ones. To demonstrate the efficacy of LinkQ, we conducted a qualitative study with five KG practitioners and distilled their feedback.
Our results indicate that practitioners find LinkQ effective for KG question-answering, and desire future LLM-assisted systems for the exploratory analysis of graph databases.","accessible_pdf":false,"authors":[{"affiliations":["MIT Lincoln Laboratory, Lexington, United States"],"email":"harry.li@ll.mit.edu","is_corresponding":true,"name":"Harry Li"},{"affiliations":["Tufts University, Medford, United States"],"email":"gabriel.appleby@tufts.edu","is_corresponding":false,"name":"Gabriel Appleby"},{"affiliations":["MIT Lincoln Laboratory, Lexington, United States"],"email":"ashley.suh@ll.mit.edu","is_corresponding":false,"name":"Ashley Suh"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Harry Li"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1193","time_end":"","time_stamp":"","time_start":"","title":"LinkQ: An LLM-Assisted Visual Interface for Knowledge Graph Question-Answering","uid":"v-short-1193","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"In the digital landscape, the ubiquity of data visualizations in media underscores the necessity for accessibility to ensure inclusivity for all users, including those with visual impairments. Current visual content often fails to cater to the needs of screen reader users due to the absence of comprehensive textual descriptions. To address this gap, we propose in this paper a framework designed to empower media content creators to transform charts into descriptive narratives. This tool not only facilitates the understanding of complex visual data through text but also fosters a broader awareness of accessibility in digital content creation. Through the application of this framework, users can interpret and convey the insights of data visualizations more effectively, accommodating a diverse audience. Our evaluations reveal that this tool not only enhances the comprehension of data visualizations but also promotes new perspectives on the represented data, thereby broadening the interpretative possibilities for all users.","accessible_pdf":false,"authors":[{"affiliations":["Polytechnique Montr\u00e9al, Montr\u00e9al, Canada"],"email":"qiangxu1204@gmail.com","is_corresponding":true,"name":"Qiang Xu"},{"affiliations":["Polytechnique Montreal, Montreal, Canada"],"email":"thomas.hurtut@polymtl.ca","is_corresponding":false,"name":"Thomas Hurtut"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Qiang Xu"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1199","time_end":"","time_stamp":"","time_start":"","title":"From Graphs to Words: A Computer-Assisted Framework for the Production of Accessible Text Descriptions","uid":"v-short-1199","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"An essential task of an air traffic controller is to manage the traffic flow by predicting future trajectories. Complex traffic patterns are difficult to predict and manage and impose cognitive load on the air traffic controllers. 
In this work, we present an interactive visual analytics interface that facilitates detection and resolution of complex traffic patterns for air traffic controllers (ATCos). The interface supports the users in detecting complex clusters of aircraft and uses visual representations to communicate these patterns to the controllers and to propose re-routing strategies. The interface further enables the ATCos to visualize and simultaneously compare how different re-routing strategies for each individual aircraft yield a reduction of complexity in the entire sector for the next hour. The development of the concepts was supported by the domain-specific feedback we received from six fully licensed and operational air traffic controllers in an iterative design process over a period of 14 months.","accessible_pdf":false,"authors":[{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden","Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"elmira.zohrevandi@liu.se","is_corresponding":true,"name":"Elmira Zohrevandi"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden","Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"katerina.vrotsou@liu.se","is_corresponding":false,"name":"Katerina Vrotsou"},{"affiliations":["Institute of Science and Technology, Norrk\u00f6ping, Sweden","Institute of Science and Technology, Norrk\u00f6ping, Sweden"],"email":"carl.westin@liu.se","is_corresponding":false,"name":"Carl A. L. Westin"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden","Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"jonas.lundberg@liu.se","is_corresponding":false,"name":"Jonas Lundberg"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden","Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"anders.ynnerman@liu.se","is_corresponding":false,"name":"Anders Ynnerman"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Elmira Zohrevandi"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1207","time_end":"","time_stamp":"","time_start":"","title":"Design of a Real-Time Visual Analytics Decision Support Interface to Manage Air Traffic Complexity","uid":"v-short-1207","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Transfer function design is crucial in volume rendering, as it directly influences the visual representation and interpretation of volumetric data. However, creating effective transfer functions that align with users\u2019 visual objectives is often challenging due to the complex parameter space and the semantic gap between transfer function values and features of interest within the volume. In this work, we propose a novel approach that leverages recent advancements in language-vision models to bridge this semantic gap. By employing a fully differentiable rendering pipeline and an image-based loss function guided by language descriptions, our method generates transfer functions that yield volume-rendered images closely matching the user\u2019s intent. We demonstrate the effectiveness of our approach in creating meaningful transfer functions from simple descriptions, empowering users to intuitively express their desired visual outcomes with minimal effort.
This advancement streamlines the transfer function design process and makes volume rendering more accessible to a broader range of users.","accessible_pdf":false,"authors":[{"affiliations":["Vanderbilt University, Nashville, United States"],"email":"sangwon.jeong@vanderbilt.edu","is_corresponding":true,"name":"Sangwon Jeong"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"jixianli@sci.utah.edu","is_corresponding":false,"name":"Jixian Li"},{"affiliations":["Lawrence Livermore National Laboratory , Livermore, United States"],"email":"shusenl@sci.utah.edu","is_corresponding":false,"name":"Shusen Liu"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"crj@sci.utah.edu","is_corresponding":false,"name":"Chris R. Johnson"},{"affiliations":["Vanderbilt University, Nashville, United States"],"email":"matthew.berger@vanderbilt.edu","is_corresponding":false,"name":"Matthew Berger"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Sangwon Jeong"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1211","time_end":"","time_stamp":"","time_start":"","title":"Text-based transfer function design for semantic volume rendering","uid":"v-short-1211","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Diffusion-based generative models\u2019 impressive ability to create convincing images has garnered global attention. However, their complex structures and operations often pose challenges for non-experts to grasp. We present Diffusion Explainer, the first interactive visualization tool that explains how Stable Diffusion transforms text prompts into images. Diffusion Explainer tightly integrates a visual overview of Stable Diffusion\u2019s complex structure with explanations of the underlying operations. By comparing image generation of prompt variants, users can discover the impact of keyword changes on image generation. A 56-participant user study demonstrates that Diffusion Explainer offers substantial learning benefits to non-experts. Our tool has been used by over 10,300 users from 124 countries at https://poloclub.github.io/diffusion-explainer/.","accessible_pdf":false,"authors":[{"affiliations":["Georgia Tech, Atlanta, United States"],"email":"seongmin@gatech.edu","is_corresponding":true,"name":"Seongmin Lee"},{"affiliations":["GA Tech, Atlanta, United States","IBM Research AI, Cambridge, United States"],"email":"benjamin.hoover@ibm.com","is_corresponding":false,"name":"Benjamin Hoover"},{"affiliations":["IBM Research AI, Cambridge, United States"],"email":"hendrik@strobelt.com","is_corresponding":false,"name":"Hendrik Strobelt"},{"affiliations":["Georgia Tech, Atlanta, United States"],"email":"jayw@gatech.edu","is_corresponding":false,"name":"Zijie J. 
Wang"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"speng65@gatech.edu","is_corresponding":false,"name":"ShengYun Peng"},{"affiliations":["Georgia Institute of Technology , Atlanta , United States"],"email":"apwright@gatech.edu","is_corresponding":false,"name":"Austin P Wright"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"kevin.li@gatech.edu","is_corresponding":false,"name":"Kevin Li"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"haekyu@gatech.edu","is_corresponding":false,"name":"Haekyu Park"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"alexanderyang@gatech.edu","is_corresponding":false,"name":"Haoyang Yang"},{"affiliations":["Georgia Tech, Atlanta, United States"],"email":"polo@gatech.edu","is_corresponding":false,"name":"Duen Horng (Polo) Chau"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Seongmin Lee"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1224","time_end":"","time_stamp":"","time_start":"","title":"Diffusion Explainer: Visual Explanation for Text-to-image Stable Diffusion","uid":"v-short-1224","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"A high number of samples often leads to occlusion in scatterplots, which hinders data perception and analysis. De-cluttering approaches based on spatial transformation reduce visual clutter by remapping samples using the entire available scatterplot domain. Such regularized scatterplots may still be used for data analysis tasks, if the spatial transformation is smooth and preserves the original neighborhood relations of samples. Recently, Rave et al. proposed an efficient regularization method based on integral images. We propose a generalization of their regularization scheme using sector-based transformations with the aim of increasing sample uniformity of the resulting scatterplot. 
We document the improvement of our approach using various uniformity measures.","accessible_pdf":false,"authors":[{"affiliations":["University of M\u00fcnster, M\u00fcnster, Germany"],"email":"hennes.rave@uni-muenster.de","is_corresponding":true,"name":"Hennes Rave"},{"affiliations":["University of M\u00fcnster, M\u00fcnster, Germany"],"email":"molchano@uni-muenster.de","is_corresponding":false,"name":"Vladimir Molchanov"},{"affiliations":["University of M\u00fcnster, M\u00fcnster, Germany"],"email":"linsen@uni-muenster.de","is_corresponding":false,"name":"Lars Linsen"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Hennes Rave"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1235","time_end":"","time_stamp":"","time_start":"","title":"Uniform Sample Distribution in Scatterplots via Sector-based Transformation","uid":"v-short-1235","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Automatically generating data visualizations in response to human utterances on datasets necessitates a deep semantic understanding of the data utterance, including implicit and explicit references to data attributes, visualization tasks, and necessary data preparation steps. Natural Language Interfaces (NLIs) for data visualization have explored ways to infer such information, yet challenges persist due to inherent uncertainty in human speech. Recent advances in Large Language Models (LLMs) provide an avenue to address these challenges, but their ability to extract the relevant semantic information remains unexplored. In this study, we evaluate four publicly available LLMs (GPT-4, Gemini-Pro, Llama3, and Mixtral), investigating their ability to comprehend utterances even in the presence of uncertainty and identify the relevant data context and visual tasks. Our findings reveal that LLMs are sensitive to uncertainties in utterances. Despite this sensitivity, they are able to extract the relevant data context. However, LLMs struggle with inferring visualization tasks. Based on these results, we highlight future research directions on using LLMs for visualization generation. Our supplementary materials have been shared on OSF: https://osf.io/j342a/wiki/home/?view_only=b4051ffc6253496d9bce818e4a89b9f9","accessible_pdf":false,"authors":[{"affiliations":["University of Maryland, College Park, United States"],"email":"hbako@umd.edu","is_corresponding":true,"name":"Hannah K. Bako"},{"affiliations":["University of Maryland, College Park, United States"],"email":"arshnoorbhutani8@gmail.com","is_corresponding":false,"name":"Arshnoor Bhutani"},{"affiliations":["The University of Texas at Austin, Austin, United States"],"email":"xinyi.liu@utexas.edu","is_corresponding":false,"name":"Xinyi Liu"},{"affiliations":["University of Maryland, College Park, United States"],"email":"kcobbina@cs.umd.edu","is_corresponding":false,"name":"Kwesi Adu Cobbina"},{"affiliations":["University of Maryland, College Park, United States"],"email":"leozcliu@umd.edu","is_corresponding":false,"name":"Zhicheng Liu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Hannah K. 
Bako"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1236","time_end":"","time_stamp":"","time_start":"","title":"Evaluating the Semantic Profiling Abilities of LLMs for Natural Language Utterances in Data Visualization","uid":"v-short-1236","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Statistical practices such as building regression models or running hypothesis tests rely on following rigorous procedures of steps and verifying assumptions on data to produce valid results. However, common statistical tools do not verify users\u2019 decision choices and provide low-level statistical functions without instructions on the whole analysis practice. Users can easily misuse analysis methods, potentially decreasing the validity of results. To address this problem, we introduce GuidedStats, an interactive interface within computational notebooks that encapsulates guidance, models, visualization, and exportable results into interactive workflows. It breaks down typical analysis processes, such as linear regression and two-sample T-tests, into interactive steps supplemented with automatic visualizations and explanations for step-wise evaluation. Users can iterate on input choices to refine their models, while recommended actions and exports allow the user to continue their analysis in code. Case studies show how GuidedStats offers valuable instructions for conducting fluid statistical analyses while finding possible assumption violations in the underlying data, supporting flexible and accurate statistical analyses.","accessible_pdf":false,"authors":[{"affiliations":["New York University, New York, United States"],"email":"yz9381@nyu.edu","is_corresponding":true,"name":"Yuqi Zhang"},{"affiliations":["Carnegie Mellon University, Pittsburgh, United States"],"email":"adamperer@cmu.edu","is_corresponding":false,"name":"Adam Perer"},{"affiliations":["Carnegie Mellon University, Pittsburgh, United States"],"email":"willepp@cmu.edu","is_corresponding":false,"name":"Will Epperson"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yuqi Zhang"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1248","time_end":"","time_stamp":"","time_start":"","title":"Guided Statistical Workflows with Interactive Explanations and Assumption Checking","uid":"v-short-1248","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"The Local Moran's I statistic is a valuable tool for identifying localized patterns of spatial autocorrelation. Understanding these patterns is crucial in spatial analysis, but interpreting the statistic can be difficult. To simplify this process, we introduce three novel visualizations that enhance the interpretation of Local Moran's I results. These visualizations can be interactively linked to one another, and to established visualizations, to offer a more holistic exploration of the results. 
We provide a JavaScript library with implementations of these new visual elements, along with a web dashboard that demonstrates their integrated use.","accessible_pdf":false,"authors":[{"affiliations":["NIH, Rockville, United States","Queen's University, Belfast, United Kingdom"],"email":"masonlk@nih.gov","is_corresponding":true,"name":"Lee Mason"},{"affiliations":["Queen's University Belfast, Belfast, United Kingdom"],"email":"b.hicks@qub.ac.uk","is_corresponding":false,"name":"Bl\u00e1naid Hicks"},{"affiliations":["National Institutes of Health, Rockville, United States"],"email":"jonas.dealmeida@nih.gov","is_corresponding":false,"name":"Jonas S Almeida"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Lee Mason"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1264","time_end":"","time_stamp":"","time_start":"","title":"Demystifying Spatial Dependence: Interactive Visualizations for Interpreting Local Spatial Autocorrelation","uid":"v-short-1264","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"This study examines the impact of positive and negative contrast polarities (i.e., light and dark modes) on the performance of younger adults and people in their late adulthood (PLA). In a crowdsourced study with 134 participants (69 below age 60, 66 aged 60 and above), we assessed their accuracy and time performing analysis tasks across three common visualization types (Bar, Line, Scatterplot) and two contrast polarities (positive and negative). We observed that, across both age groups, the polarity that led to better performance and the resulting amount of improvement varied on an individual basis, with each polarity benefiting comparable proportions of participants. Additionally, we observed that the choice of contrast polarity can have an impact on time similar to that of the choice of visualization type, resulting in an average percent difference of around 36%. These findings indicate that, overall, the effects of contrast polarity on visual analysis performance do not noticeably change with age. Furthermore, they underscore the importance of making visualizations available in both contrast polarities to better support a broad audience with differing needs.","accessible_pdf":false,"authors":[{"affiliations":["University of Massachusetts Amherst, Amherst, United States"],"email":"zwhile@cs.umass.edu","is_corresponding":true,"name":"Zack While"},{"affiliations":["University of Massachusetts Amherst, Amherst, United States"],"email":"asarv@cs.umass.edu","is_corresponding":false,"name":"Ali Sarvghad"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Zack While"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1274","time_end":"","time_stamp":"","time_start":"","title":"Dark Mode or Light Mode? 
Exploring the Impact of Contrast Polarity on Visualization Performance Between Age Groups","uid":"v-short-1274","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Machine Learning models for chart-grounded Q&A (CQA) often treat charts as images, but performing CQA on pixel values has proven challenging. We thus investigate a resource overlooked by current ML-based approaches: the declarative documents describing how charts should visually encode data (i.e., chart specifications). In this work, we use chart specifications to enhance language models (LMs) for chart-reading tasks, such that the resulting system can robustly understand language for CQA. Through a case study with 359 bar charts, we test novel fine tuning schemes on both GPT-3 and T5 using a new dataset curated for two CQA tasks: question-answering and visual explanation generation. Our text-only approaches strongly outperform vision-based GPT-4 on explanation generation (99% vs. 63% accuracy), and show promising results for question-answering (57-67% accuracy). Through in-depth experiments, we also show that our text-only approaches are mostly robust to natural language variation.","accessible_pdf":false,"authors":[{"affiliations":["Adobe Research, San Jose, United States"],"email":"victorbursztyn2022@u.northwestern.edu","is_corresponding":true,"name":"Victor S. Bursztyn"},{"affiliations":["Adobe Research, Seattle, United States"],"email":"jhoffs@adobe.com","is_corresponding":false,"name":"Jane Hoffswell"},{"affiliations":["Adobe Research, San Jose, United States"],"email":"sguo@adobe.com","is_corresponding":false,"name":"Shunan Guo"},{"affiliations":["Adobe Research, San Jose, United States"],"email":"eunyee@adobe.com","is_corresponding":false,"name":"Eunyee Koh"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Victor S. Bursztyn"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1276","time_end":"","time_stamp":"","time_start":"","title":"Representing Charts as Text for Language Models: An In-Depth Study of Question Answering for Bar Charts","uid":"v-short-1276","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Trust is a subjective yet fundamental component of human-computer interaction, and is a determining factor in shaping the efficacy of data visualizations. Prior research has identified five dimensions of trust assessment in visualizations (credibility, clarity, reliability, familiarity, and confidence), and observed that these dimensions tend to vary predictably along with certain features of the visualization being evaluated. This raises a further question: how do the design features driving viewers' trust assessment vary with the characteristics of the viewers themselves? By reanalyzing data from these studies through the lens of individual differences, we build a more detailed map of the relationships between design features, individual characteristics, and trust behaviors. In particular, we model the distinct contributions of endogenous design features (such as visualization type, or the use of color) and exogenous user characteristics (such as visualization literacy), as well as the interactions between them. 
We then use these findings to make recommendations for individualized and adaptive visualization design.","accessible_pdf":false,"authors":[{"affiliations":["Smith College, Northampton, United States"],"email":"jcrouser@smith.edu","is_corresponding":true,"name":"R. Jordan Crouser"},{"affiliations":["Smith College, Northampton, United States"],"email":"cmatoussi@smith.edu","is_corresponding":false,"name":"Syrine Matoussi"},{"affiliations":["Smith College, Northampton, United States"],"email":"ekung@smith.edu","is_corresponding":false,"name":"Lan Kung"},{"affiliations":["Washington University in St. Louis, St. Louis, United States"],"email":"p.saugat@wustl.edu","is_corresponding":false,"name":"Saugat Pandey"},{"affiliations":["Washington University in St. Louis, St. Louis, United States"],"email":"m.oen@wustl.edu","is_corresponding":false,"name":"Oen G McKinley"},{"affiliations":["Washington University in St. Louis, St. Louis, United States"],"email":"alvitta@wustl.edu","is_corresponding":false,"name":"Alvitta Ottley"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["R. Jordan Crouser"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1277","time_end":"","time_stamp":"","time_start":"","title":"Building and Eroding: Exogenous and Endogenous Factors that Influence Subjective Trust in Visualization","uid":"v-short-1277","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"This study examines the impact of social-comparison risk visualizations on public health communication, comparing the effects of traditional bar charts against alternative jitter plots emphasizing geographic variability (geo jitter). The research highlights that whereas both visualization types increased perceived vulnerability, behavioral intent, and policy support, the geo jitter plots were significantly more effective in reducing unjustified personal attributions. Importantly, the findings also underscore the emotional challenges faced by visualization viewers from marginalized communities, indicating a need for designs that are sensitive to the potential for reinforcing stereotypes or eliciting negative emotions. This work suggests a strategic reevaluation of visual communication tools in public health to enhance understanding and engagement without contributing to negative attributions or emotional distress.","accessible_pdf":false,"authors":[{"affiliations":["3iap, Raleigh, United States"],"email":"eli@3iap.com","is_corresponding":false,"name":"Eli Holder"},{"affiliations":["Northeastern University, Boston, United States","University of California Merced, Merced, United States"],"email":"l.padilla@northeastern.edu","is_corresponding":true,"name":"Lace M. Padilla"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Lace M. 
Padilla"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1285","time_end":"","time_stamp":"","time_start":"","title":"\"Must Be a Tuesday\": Affect, Attribution, and Geographic Variability in Equity-Oriented Visualizations of Population Health Disparities","uid":"v-short-1285","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Collaborative planning for congenital heart diseases typically involves creating physical heart models through 3D printing, which are then examined by both surgeons and cardiologists. Recent developments in mobile augmented reality (AR) technologies have presented a viable alternative, known for their ease of use and portability. However, there is still a lack of research examining the utilization of multi-user mobile AR environments to support collaborative planning for cardiovascular surgeries. We created ARCollab, an iOS AR app designed for enabling multiple surgeons and cardiologists to interact with a patient's 3D heart model in a shared environment. ARCollab enables surgeons and cardiologists to import heart models, manipulate them through gestures and collaborate with other users, eliminating the need for fabricating physical heart models. Our evaluation of ARCollab's usability and usefulness in enhancing collaboration, conducted with three cardiothoracic surgeons and two cardiologists, marks the first human evaluation of a multi-user mobile AR tool for surgical planning. ARCollab is open-source, available at https://github.com/poloclub/arcollab.","accessible_pdf":false,"authors":[{"affiliations":["Georgia Tech, Atlanta, United States"],"email":"pratham.mehta001@gmail.com","is_corresponding":true,"name":"Pratham Darrpan Mehta"},{"affiliations":["Georgia Tech, Atlanta, United States"],"email":"rnarayanan39@gatech.edu","is_corresponding":false,"name":"Rahul Ozhur Narayanan"},{"affiliations":["Georgia Tech, Atlanta, United States"],"email":"harsha5431@gmail.com","is_corresponding":false,"name":"Harsha Karanth"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"alexanderyang@gatech.edu","is_corresponding":false,"name":"Haoyang Yang"},{"affiliations":["Emory University, Atlanta, United States"],"email":"slesnickt@kidsheart.com","is_corresponding":false,"name":"Timothy C Slesnick"},{"affiliations":["Emory University/Children's Healthcare of Atlanta, Atlanta, United States"],"email":"fawwaz.shaw@choa.org","is_corresponding":false,"name":"Fawwaz Shaw"},{"affiliations":["Georgia Tech, Atlanta, United States"],"email":"polo@gatech.edu","is_corresponding":false,"name":"Duen Horng (Polo) Chau"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Pratham Darrpan Mehta"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1292","time_end":"","time_stamp":"","time_start":"","title":"Multi-User Mobile Augmented Reality for Cardiovascular Surgical Planning","uid":"v-short-1292","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Reactionary delay'' is a result of the accumulated cascading 
effects of knock-on train delays. It is a growing problem as shared railway infrastructure becomes more crowded. The chaotic nature of its effects is notoriously hard to predict. We use a stochastic Monte Carlo-style simulation of reactionary delay that produces whole distributions of likely reactionary delay. Our contribution is demonstrating how Zoomable GlyphTables -- case-by-variable tables in which cases are rows and variables are columns, where variables are complex composite metrics that incorporate distributions, and cells contain mini-charts that depict these at different levels of detail through zoom interaction -- help interpret these results, aiding understanding of the causes and effects of reactionary delay, and how they have informed timetable robustness testing and tweaking. We describe our design principles, demonstrate how these supported our analytical tasks, and reflect on the wider potential of Zoomable GlyphTables.","accessible_pdf":false,"authors":[{"affiliations":["City, University of London, London, United Kingdom"],"email":"a.slingsby@city.ac.uk","is_corresponding":true,"name":"Aidan Slingsby"},{"affiliations":["Risk Solutions, Warrington, United Kingdom"],"email":"jonathan.hyde@risksol.co.uk","is_corresponding":false,"name":"Jonathan Hyde"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Aidan Slingsby"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"short","presentation_mode":"","session_id":"short0","slot_id":"v-short-1301","time_end":"","time_stamp":"","time_start":"","title":"Zoomable Glyph Tables for Interpreting Probabilistic Model Outputs for Reactionary Train Delays","uid":"v-short-1301","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""}],"time_start":"","title":"Short Papers","track":"","zoom_broadcast_link":"","zoom_private_link":"","zoom_private_meeting":"","zoom_private_password":""}]},"v-siggraph":{"event":"SIGGRAPH Invited Partnership Presentations","event_description":"","event_prefix":"v-siggraph","event_type":"invited","event_url":"","long_name":"SIGGRAPH Invited Partnership Presentations","organizers":[],"sessions":[]},"v-spotlights":{"event":"Application Spotlights","event_description":"","event_prefix":"v-spotlights","event_type":"application","event_url":"","long_name":"Application Spotlights","organizers":[],"sessions":[]},"v-tvcg":{"event":"TVCG Invited Presentations","event_description":"","event_prefix":"v-tvcg","event_type":"invited","event_url":"","long_name":"TVCG Invited Presentations","organizers":[],"sessions":[{"chair":[],"discord_category":"","discord_channel":"","discord_channel_id":"","discord_link":"","event_prefix":"v-tvcg","ff_link":"","session_id":"tvcg0","session_image":"tvcg0.png","time_end":"","time_slots":[{"abstract":"Data transformation is an essential step in data science. While experts primarily use programming to transform their data, there is an increasing need to support non-programmers with user interface-based tools. With the rapid development in interaction techniques and computing environments, we report our empirical findings about the effects of interaction techniques and environments on performing data transformation tasks. Specifically, we studied the potential benefits of direct interaction and virtual reality (VR) for data transformation. 
We compared gesture interaction versus a standard WIMP user interface, each on the desktop and in VR. With the tested data and tasks, we found time performance was similar between desktop and VR. Meanwhile, VR demonstrates preliminary evidence to better support provenance and sense-making throughout the data transformation process. Our exploration of performing data transformation in VR also provides initial affirmation for enabling an iterative and fully immersive data science workflow.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Sungwon In"},{"affiliations":"","email":"","is_corresponding":false,"name":"Tica Lin"},{"affiliations":"","email":"","is_corresponding":false,"name":"Chris North"},{"affiliations":"","email":"","is_corresponding":false,"name":"Hanspeter Pfister"},{"affiliations":"","email":"","is_corresponding":false,"name":"Yalong Yang"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Sungwon In"],"doi":"10.1109/TVCG.2023.3299602","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Immersive Analytics, Data Transformation, Data Science, Interaction, Empirical Study, Virtual/Augmented/Mixed Reality"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233299602","time_end":"","time_stamp":"","time_start":"","title":"This is the Table I Want! Interactive Data Transformation on Desktop and in Virtual Reality","uid":"v-tvcg-20233299602","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"The dynamic network visualization design space consists of two major dimensions: network structural and temporal representation. As more techniques are developed and published, a clear need for evaluation and experimental comparisons between them emerges. Most studies explore the temporal dimension and diverse interaction techniques supporting the participants, focusing on a single structural representation. Empirical evidence about performance and preference for different visualization approaches is scattered over different studies, experimental settings, and tasks. This paper aims to comprehensively investigate the dynamic network visualization design space in two evaluations. First, a controlled study assessing participants' response times, accuracy, and preferences for different combinations of network structural and temporal representations on typical dynamic network exploration tasks, with and without the support of standard interaction methods. Second, the best-performing combinations from the first study are enhanced based on participants' feedback and evaluated in a heuristic-based qualitative study with visualization experts on a real-world network. Our results highlight node-link with animation and playback controls as the best-performing combination and the most preferred based on ratings. Matrices achieve similar performance to node-link in the first study but have considerably lower scores in our second evaluation. 
Similarly, juxtaposition exhibits evident scalability issues in more realistic analysis contexts.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Velitchko Filipov"},{"affiliations":"","email":"","is_corresponding":false,"name":"Alessio Arleo"},{"affiliations":"","email":"","is_corresponding":false,"name":"Markus B\u00f6gl"},{"affiliations":"","email":"","is_corresponding":false,"name":"Silvia Miksch"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Velitchko Filipov"],"doi":"10.1109/TVCG.2023.3310019","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233310019","time_end":"","time_stamp":"","time_start":"","title":"On Network Structural and Temporal Encodings: A Space and Time Odyssey","uid":"v-tvcg-20233310019","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"We visualize the predictions of multiple machine learning models to help biologists as they interactively make decisions about cell lineage---the development of a (plant) embryo from a single ovum cell. Based on a confocal microscopy dataset, traditionally biologists manually constructed the cell lineage, starting from this observation and reasoning backward in time to establish their inheritance. To speed up this tedious process, we make use of machine learning (ML) models trained on a database of manually established cell lineages to assist the biologist in cell assignment. Most biologists, however, are not familiar with ML, nor is it clear to them which model best predicts the embryo's development. We thus have developed a visualization system that is designed to support biologists in exploring and comparing ML models, checking the model predictions, detecting possible ML model mistakes, and deciding on the most likely embryo development. To evaluate our proposed system, we deployed our interface with six biologists in an observational study. 
Our results show that the visual representations of machine learning are easily understandable, and our tool, LineageD+, could potentially increase biologists' working efficiency and enhance the understanding of embryos.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Jiayi Hong"},{"affiliations":"","email":"","is_corresponding":false,"name":"Ross Maciejewski"},{"affiliations":"","email":"","is_corresponding":false,"name":"Alain Trubuil"},{"affiliations":"","email":"","is_corresponding":false,"name":"Tobias Isenberg"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Jiayi Hong"],"doi":"10.1109/TVCG.2023.3302308","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Visualization, visual analytics, machine learning, comparing ML predictions, human-AI teaming, plant biology, cell lineage"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233302308","time_end":"","time_stamp":"","time_start":"","title":"Visualizing and Comparing Machine Learning Predictions to Improve Human-AI Teaming on the Example of Cell Lineage","uid":"v-tvcg-20233302308","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"A contiguous area cartogram is a geographic map in which the area of each region is proportional to numerical data (e.g., population size) while keeping neighboring regions connected. In this study, we investigated whether value-to-area legends (square symbols next to the values represented by the squares' areas) and grid lines aid map readers in making better area judgments. We conducted an experiment to determine the accuracy, speed, and confidence with which readers infer numerical data values for the mapped regions. We found that, when only informed about the total numerical value represented by the whole cartogram without any legend, the distribution of estimates for individual regions was centered near the true value with substantial spread. Legends with grid lines significantly reduced the spread but led to a tendency to underestimate the values. Comparing differences between regions or between cartograms revealed that legends and grid lines slowed the estimation without improving accuracy. However, participants were more likely to complete the tasks when legends and grid lines were present, particularly when the area units represented by these features could be interactively selected. We recommend considering the cartogram's use case and purpose before deciding whether to include grid lines or an interactive legend.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":false,"name":"Kelvin L. T. Fung"},{"affiliations":"","email":"","is_corresponding":false,"name":"Simon T. Perrault"},{"affiliations":"","email":"","is_corresponding":false,"name":"Michael T. 
Gastner"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Michael Gastner"],"doi":"10.1109/TVCG.2023.3275925","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Task Analysis, Symbols, Data Visualization, Sociology, Visualization, Switches, Mice, Cartogram, Geovisualization, Interactive Data Exploration, Quantitative Evaluation"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233275925","time_end":"","time_stamp":"","time_start":"","title":"Effectiveness of Area-to-Value Legends and Grid Lines in Contiguous Area Cartograms","uid":"v-tvcg-20233275925","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Reading a visualization is like reading a paragraph. Each sentence is a comparison: the mean of these is higher than those; this difference is smaller than that. What determines which comparisons are made first? The viewer's goals and expertise matter, but the way that values are visually grouped together within the chart also impacts those comparisons. Research from psychology suggests that comparisons involve multiple steps. First, the viewer divides the visualization into a set of units. This might include a single bar or a grouped set of bars. Then the viewer selects and compares two of these units, perhaps noting that one pair of bars is longer than another. Viewers might take an additional third step and perform a second-order comparison, perhaps determining that the difference between one pair of bars is greater than the difference between another pair. We create a visual comparison taxonomy that allows us to develop and test a sequence of hypotheses about which comparisons people are more likely to make when reading a visualization. We find that people tend to compare two groups before comparing two individual bars and that second-order comparisons are rare. Visual cues like spatial proximity and color can influence which elements are grouped together and selected for comparison, with spatial proximity being a stronger grouping cue. Interestingly, once the viewer grouped together and compared a set of bars, regardless of whether the group is formed by spatial proximity or color similarity, they no longer consider other possible groupings in their comparisons.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Cindy Xiong Bearfield"},{"affiliations":"","email":"","is_corresponding":false,"name":"Chase Stokes"},{"affiliations":"","email":"","is_corresponding":false,"name":"Andrew Lovett"},{"affiliations":"","email":"","is_corresponding":false,"name":"Steven Franconeri"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Cindy Xiong Bearfield"],"doi":"10.1109/TVCG.2023.3289292","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["comparison, perception, visual grouping, bar charts, verbal conclusions."],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233289292","time_end":"","time_stamp":"","time_start":"","title":"What Does the Chart Say? 
Grouping Cues Guide Viewer Comparisons and Conclusions in Bar Charts","uid":"v-tvcg-20233289292","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Automated visualization recommendation facilitates the rapid creation of effective visualizations, which is especially beneficial for users with limited time and limited knowledge of data visualization. There is an increasing trend in leveraging machine learning (ML) techniques to achieve an end-to-end visualization recommendation. However, existing ML-based approaches implicitly assume that there is only one appropriate visualization for a specific dataset, which is often not true for real applications. Also, they often work like a black box, making it difficult for users to understand the reasons for recommending specific visualizations. To fill the research gap, we propose AdaVis, an adaptive and explainable approach to recommend one or multiple appropriate visualizations for a tabular dataset. It leverages a box embedding-based knowledge graph to effectively model the possible one-to-many mapping relations among different entities (i.e., data features, dataset columns, datasets, and visualization choices). The embeddings of the entities and relations can be learned from dataset-visualization pairs. Also, AdaVis incorporates the attention mechanism into the inference framework. Attention can indicate the relative importance of data features for a dataset and provide fine-grained explainability. Our extensive evaluations through quantitative metric evaluations, case studies, and user interviews demonstrate the effectiveness of AdaVis.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Songheng Zhang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Yong Wang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Haotian Li"},{"affiliations":"","email":"","is_corresponding":false,"name":"Huamin Qu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Songheng Zhang"],"doi":"10.1109/TVCG.2023.3316469","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Visualization Recommendation, Logical Reasoning, Data Visualization, Knowledge Graph"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233316469","time_end":"","time_stamp":"","time_start":"","title":"AdaVis: Adaptive and Explainable Visualization Recommendation for Tabular Data","uid":"v-tvcg-20233316469","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Visualization linting is a proven effective tool in assisting users to follow established visualization guidelines. Despite its success, visualization linting for choropleth maps, one of the most popular visualizations on the internet, has yet to be investigated. In this paper, we present GeoLinter, a linting framework for choropleth maps that assists in creating accurate and robust maps. Based on a set of design guidelines and metrics drawing upon a collection of best practices from the cartographic literature, GeoLinter detects potentially suboptimal design decisions and provides further recommendations on design improvement with explanations at each step of the design process. 
We perform a validation study to evaluate the proposed framework's functionality with respect to identifying and fixing errors and apply its results to improve the robustness of GeoLinter. Finally, we demonstrate the effectiveness of GeoLinter, validated through empirical studies, by applying it to a series of case studies using real-world datasets.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":false,"name":"Fan Lei"},{"affiliations":"","email":"","is_corresponding":true,"name":"Arlen Fan"},{"affiliations":"","email":"","is_corresponding":false,"name":"Alan M. MacEachren"},{"affiliations":"","email":"","is_corresponding":false,"name":"Ross Maciejewski"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Arlen Fan"],"doi":"10.1109/TVCG.2023.3322372","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Data visualization, Image color analysis, Geology, Recommender systems, Guidelines, Bars, Visualization, Automated visualization design, choropleth maps, visualization linting, visualization recommendation"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233322372","time_end":"","time_stamp":"","time_start":"","title":"GeoLinter: A Linting Framework for Choropleth Maps","uid":"v-tvcg-20233322372","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Researchers have derived many theoretical models for specifying users\u2019 insights as they interact with a visualization system. These representations are essential for understanding the insight discovery process, such as when inferring user interaction patterns that lead to insight or assessing the rigor of reported insights. However, theoretical models can be difficult to apply to existing tools and user studies, often due to discrepancies in how insight and its constituent parts are defined. This paper calls attention to the consistent structures that recur across the visualization literature and describes how they connect multiple theoretical representations of insight. We synthesize a unified formalism for insights using these structures, enabling a wider audience of researchers and developers to adopt the corresponding models. Through a series of theoretical case studies, we use our formalism to compare and contrast existing theories, revealing interesting research challenges in reasoning about a user's domain knowledge and leveraging synergistic approaches in data mining and data management research.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Leilani Battle"},{"affiliations":"","email":"","is_corresponding":false,"name":"Alvitta Ottley"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Leilani Battle"],"doi":"10.1109/TVCG.2023.3326698","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233326698","time_end":"","time_stamp":"","time_start":"","title":"What Do We Mean When We Say \u201cInsight\u201d? 
A Formal Synthesis of Existing Theory","uid":"v-tvcg-20233326698","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"This paper presents a computational framework for the concise encoding of an ensemble of persistence diagrams, in the form of weighted Wasserstein barycenters [100], [102] of a dictionary of atom diagrams. We introduce a multi-scale gradient descent approach for the efficient resolution of the corresponding minimization problem, which interleaves the optimization of the barycenter weights with the optimization of the atom diagrams. Our approach leverages the analytic expressions for the gradient of both sub-problems to ensure fast iterations and it additionally exploits shared-memory parallelism. Extensive experiments on public ensembles demonstrate the efficiency of our approach, with Wasserstein dictionary computations in the orders of minutes for the largest examples. We show the utility of our contributions in two applications. First, we apply Wassserstein dictionaries to data reduction and reliably compress persistence diagrams by concisely representing them with their weights in the dictionary. Second, we present a dimensionality reduction framework based on a Wasserstein dictionary defined with a small number of atoms (typically three) and encode the dictionary as a low dimensional simplex embedded in a visual space (typically in 2D). In both applications, quantitative experiments assess the relevance of our framework. Finally, we provide a C++ implementation that can be used to reproduce our results.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":false,"name":"Keanu Sisouk"},{"affiliations":"","email":"","is_corresponding":false,"name":"Julie Delon"},{"affiliations":"","email":"","is_corresponding":true,"name":"Julien Tierny"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Julien Tierny"],"doi":"10.1109/TVCG.2023.3330262","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Topological data analysis, ensemble data, persistence diagrams"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233330262","time_end":"","time_stamp":"","time_start":"","title":"Wasserstein Dictionaries of Persistence Diagrams","uid":"v-tvcg-20233330262","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"We present Submerse, an end-to-end framework for visualizing flooding scenarios on large and immersive display ecologies. Specifically, we reconstruct a surface mesh from input flood simulation data and generate a to-scale 3D virtual scene by incorporating geographical data such as terrain, textures, buildings, and additional scene objects. To optimize computation and memory performance for large simulation datasets, we discretize the data on an adaptive grid using dynamic quadtrees and support level-of-detail based rendering. Moreover, to provide a perception of flooding direction for a time instance, we animate the surface mesh by synthesizing water waves. 
As interaction is key for effective decision-making and analysis, we introduce two novel techniques for flood visualization in immersive systems: (1) an automatic scene-navigation method using optimal camera viewpoints generated for marked points-of-interest based on the display layout, and (2) an AR-based focus+context technique using an aux display system. Submerse is developed in collaboration between computer scientists and atmospheric scientists. We evaluate the effectiveness of our system and application by conducting workshops with emergency managers, domain experts, and concerned stakeholders in the Stony Brook Reality Deck, an immersive gigapixel facility, to visualize a superstorm flooding scenario in New York City.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Saeed Boorboor"},{"affiliations":"","email":"","is_corresponding":false,"name":"Yoonsang Kim"},{"affiliations":"","email":"","is_corresponding":false,"name":"Ping Hu"},{"affiliations":"","email":"","is_corresponding":false,"name":"Josef Moses"},{"affiliations":"","email":"","is_corresponding":false,"name":"Brian Colle"},{"affiliations":"","email":"","is_corresponding":false,"name":"Arie E. Kaufman"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Saeed Boorboor"],"doi":"10.1109/TVCG.2023.3332511","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Camera navigation, flooding simulation visualization, immersive visualization, mixed reality"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233332511","time_end":"","time_stamp":"","time_start":"","title":"Submerse: Visualizing Storm Surge Flooding Simulations in Immersive Display Ecologies","uid":"v-tvcg-20233332511","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Visualization design studies bring together visualization researchers and domain experts to address yet unsolved data analysis challenges stemming from the needs of the domain experts. Typically, the visualization researchers lead the design study process and implementation of any visualization solutions. This setup leverages the visualization researchers' knowledge of methodology, design, and programming, but the availability to synchronize with the domain experts can hamper the design process. We consider an alternative setup where the domain experts take the lead in the design study, supported by the visualization experts. In this study, the domain experts are computer architecture experts who simulate and analyze novel computer chip designs. These chips rely on a Network-on-Chip (NOC) to connect components. The experts want to understand how the chip designs perform and what in the design led to their performance. To aid this analysis, we develop Vis4Mesh, a visualization system that provides spatial, temporal, and architectural context to simulated NOC behavior. Integration with an existing computer architecture visualization tool enables architects to perform deep-dives into specific architecture component behavior. We validate Vis4Mesh through a case study and a user study with computer architecture researchers. 
We reflect on our design and process, discussing advantages, disadvantages, and guidance for engaging in domain expert-led design studies.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":false,"name":"Shaoyu Wang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Hang Yan"},{"affiliations":"","email":"","is_corresponding":false,"name":"Katherine E. Isaacs"},{"affiliations":"","email":"","is_corresponding":true,"name":"Yifan Sun"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yifan Sun"],"doi":"10.1109/TVCG.2023.3337173","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Data Visualization, Design Study, Network-on-Chip, Performance Analysis"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233337173","time_end":"","time_stamp":"","time_start":"","title":"Visual Exploratory Analysis for Designing Large-Scale Network-on-Chip Architectures: A Domain Expert-Led Design Study","uid":"v-tvcg-20233337173","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Visual and interactive machine learning systems (IML) are becoming ubiquitous as they empower individuals with varied machine learning expertise to analyze data. However, it remains complex to align interactions with visual marks to a user\u2019s intent for steering machine learning models. We explore using data and visual design probes to elicit users\u2019 desired interactions to steer ML models via visual encodings within IML interfaces. We conducted an elicitation study with 20 data analysts with varying expertise in ML. We summarize our findings as target-interaction pairs, which we compare to prior systems to assess the utility of the probes. We additionally surfaced insights about factors influencing how and why participants chose to interact with visual encodings, including refraining from interacting. Finally, we reflect on the value of gathering such formative empirical evidence via data and visual design probes ahead of developing IML prototypes.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Anamaria Crisan"},{"affiliations":"","email":"","is_corresponding":false,"name":"Maddie Shang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Eric Brochu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Anamaria Crisan"],"doi":"10.1109/TVCG.2023.3322898","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Design Probes, Interactive Machine Learning, Model Steering, Semantic Interaction"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233322898","time_end":"","time_stamp":"","time_start":"","title":"Eliciting Model Steering Interactions from Users via Data and Visual Design Probes","uid":"v-tvcg-20233322898","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"This paper investigates the role of text in visualizations, specifically the impact of text position, semantic content, and biased wording. 
Two empirical studies were conducted based on two tasks (predicting data trends and appraising bias) using two visualization types (bar and line charts). While the addition of text had a minimal effect on how people perceive data trends, there was a significant impact on how biased they perceive the authors to be. This finding revealed a relationship between the degree of bias in textual information and the perception of the authors' bias. Exploratory analyses support an interaction between a person's prediction and the degree of bias they perceived. This paper also develops a crowdsourced method for creating chart annotations that range from neutral to highly biased. This research highlights the need for designers to mitigate potential polarization of readers' opinions based on how authors' ideas are expressed.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Chase Stokes"},{"affiliations":"","email":"","is_corresponding":false,"name":"Cindy Xiong Bearfield"},{"affiliations":"","email":"","is_corresponding":false,"name":"Marti Hearst"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Chase Stokes"],"doi":"10.1109/TVCG.2023.3338451","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Visualization, text, annotation, perceived bias, judgment, prediction"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233338451","time_end":"","time_stamp":"","time_start":"","title":"The Role of Text in Visualizations: How Annotations Shape Perceptions of Bias and Influence Predictions","uid":"v-tvcg-20233338451","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Molecular docking is a key technique in various fields like structural biology, medicinal chemistry, and biotechnology. It is widely used for virtual screening during drug discovery, computer-assisted drug design, and protein engineering. A general molecular docking process consists of the target and ligand selection, their preparation, and the docking process itself, followed by the evaluation of the results. However, the most commonly used docking software provides no or very basic evaluation possibilities. Scripting and external molecular viewers are often used, which are not designed for an efficient analysis of docking results. Therefore, we developed InVADo, a comprehensive interactive visual analysis tool for large docking data. It consists of multiple linked 2D and 3D views. It filters and spatially clusters the data, and enriches it with post-docking analysis results of protein-ligand interactions and functional groups, to enable well-founded decision-making. In an exemplary case study, domain experts confirmed that InVADo facilitates and accelerates the analysis workflow. They rated it as a convenient, comprehensive, and feature-rich tool, especially useful for virtual screening.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":false,"name":"Marco Sch\u00e4fer"},{"affiliations":"","email":"","is_corresponding":false,"name":"Nicolas Brich"},{"affiliations":"","email":"","is_corresponding":false,"name":"Jan By\u0161ka"},{"affiliations":"","email":"","is_corresponding":false,"name":"S\u00e9rgio M. 
Marques"},{"affiliations":"","email":"","is_corresponding":false,"name":"David Bedn\u00e1\u0159"},{"affiliations":"","email":"","is_corresponding":false,"name":"Philipp Thiel"},{"affiliations":"","email":"","is_corresponding":false,"name":"Barbora Kozl\u00edkov\u00e1"},{"affiliations":"","email":"","is_corresponding":true,"name":"Michael Krone"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Michael Krone"],"doi":"10.1109/TVCG.2023.3337642","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Molecular Docking, AutoDock, Virtual Screening, Visual Analysis, Visualization, Clustering, Protein-Ligand Interaction."],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233337642","time_end":"","time_stamp":"","time_start":"","title":"InVADo: Interactive Visual Analysis of Molecular Docking Data","uid":"v-tvcg-20233337642","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Traditional deep learning algorithms assume that all data is available during training, which presents challenges when handling large-scale time-varying data. To address this issue, we propose a data reduction pipeline called knowledge distillation-based implicit neural representation (KD-INR) for compressing large-scale time-varying data. The approach consists of two stages: spatial compression and model aggregation. In the first stage, each time step is compressed using an implicit neural representation with bottleneck layers and features of interest preservation-based sampling. In the second stage, we utilize an offline knowledge distillation algorithm to extract knowledge from the trained models and aggregate it into a single model. We evaluated our approach on a variety of time-varying volumetric data sets. 
Both quantitative and qualitative results, such as PSNR, LPIPS, and rendered images, demonstrate that KD-INR surpasses the state-of-the-art approaches, including learning-based (i.e., CoordNet, NeurComp, and SIREN) and lossy compression (i.e., SZ3, ZFP, and TTHRESH) methods, at various compression ratios ranging from hundreds to ten thousand.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":false,"name":"Jun Han"},{"affiliations":"","email":"","is_corresponding":false,"name":"Hao Zheng"},{"affiliations":"","email":"","is_corresponding":false,"name":"Change Bi"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Han Jun"],"doi":"10.1109/TVCG.2023.3345373","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Time-varying data compression, implicit neural representation, knowledge distillation, volume visualization."],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233345373","time_end":"","time_stamp":"","time_start":"","title":"KD-INR: Time-Varying Volumetric Data Compression via Knowledge Distillation-based Implicit Neural Representation","uid":"v-tvcg-20233345373","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Quantum computing offers significant speedup compared to classical computing, which has led to a growing interest among users in learning and applying quantum computing across various applications. However, quantum circuits, which are fundamental for implementing quantum algorithms, can be challenging for users to understand due to their underlying logic, such as the temporal evolution of quantum states and the effect of quantum amplitudes on the probability of basis quantum states. To fill this research gap, we propose QuantumEyes, an interactive visual analytics system to enhance the interpretability of quantum circuits at both the global and local levels. For the global-level analysis, we present three coupled visualizations to delineate the changes of quantum states and the underlying reasons: a Probability Summary View to overview the probability evolution of quantum states; a State Evolution View to enable an in-depth analysis of the influence of quantum gates on the quantum states; a Gate Explanation View to show the individual qubit states and facilitate a better understanding of the effect of quantum gates. For the local-level analysis, we design a novel geometrical visualization, the dandelion chart, to explicitly reveal how the quantum amplitudes affect the probability of the quantum state. We thoroughly evaluated QuantumEyes as well as the novel dandelion chart integrated into it through two case studies on different types of quantum algorithms and in-depth expert interviews with 12 domain experts. 
The results demonstrate the effectiveness and usability of our approach in enhancing the interpretability of quantum circuits.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Shaolun Ruan"},{"affiliations":"","email":"","is_corresponding":false,"name":"Qiang Guan"},{"affiliations":"","email":"","is_corresponding":false,"name":"Paul Griffin"},{"affiliations":"","email":"","is_corresponding":false,"name":"Ying Mao"},{"affiliations":"","email":"","is_corresponding":false,"name":"Yong Wang"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Shaolun Ruan"],"doi":"10.1109/TVCG.2023.3332999","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Data visualization, design study, interpretability, quantum computing."],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233332999","time_end":"","time_stamp":"","time_start":"","title":"QuantumEyes: Towards Better Interpretability of Quantum Circuits","uid":"v-tvcg-20233332999","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"This paper presents a computational framework for the Wasserstein auto-encoding of merge trees (MT-WAE), a novel extension of the classical auto-encoder neural network architecture to the Wasserstein metric space of merge trees. In contrast to traditional auto-encoders which operate on vectorized data, our formulation explicitly manipulates merge trees on their associated metric space at each layer of the network, resulting in superior accuracy and interpretability. Our novel neural network approach can be interpreted as a non-linear generalization of previous linear attempts [79] at merge tree encoding. It also trivially extends to persistence diagrams. Extensive experiments on public ensembles demonstrate the efficiency of our algorithms, with MT-WAE computations on the order of minutes on average. We show the utility of our contributions in two applications adapted from previous work on merge tree encoding [79]. First, we apply MT-WAE to merge tree compression, by concisely representing them with their coordinates in the final layer of our auto-encoder. Second, we document an application to dimensionality reduction, by exploiting the latent space of our auto-encoder, for the visual analysis of ensemble data. We illustrate the versatility of our framework by introducing two penalty terms, to help preserve in the latent space both the Wasserstein distances between merge trees and their clusters. In both applications, quantitative experiments assess the relevance of our framework. 
Finally, we provide a C++ implementation that can be used for reproducibility.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":false,"name":"Mathieu Pont"},{"affiliations":"","email":"","is_corresponding":true,"name":"Julien Tierny"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Julien Tierny"],"doi":"10.1109/TVCG.2023.3334755","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Topological data analysis, ensemble data, persistence diagrams, merge trees, auto-encoders, neural networks"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233334755","time_end":"","time_stamp":"","time_start":"","title":"Wasserstein Auto-Encoders of Merge Trees (and Persistence Diagrams)","uid":"v-tvcg-20233334755","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"We present VoxAR, a method to facilitate an effective visualization of volume-rendered objects in optical see-through head-mounted displays (OST-HMDs). The potential of augmented reality (AR) to integrate digital information into the physical world provides new opportunities for visualizing and interpreting scientific data. However, a limitation of OST-HMD technology is that rendered pixels of a virtual object can interfere with the colors of the real-world, making it challenging to perceive the augmented virtual information accurately. We address this challenge in a two-step approach. First, VoxAR determines an appropriate placement of the volume-rendered object in the real-world scene by evaluating a set of spatial and environmental objectives, managed as user-selected preferences and pre-defined constraints. We achieve a real-time solution by implementing the objectives using a GPU shader language. Next, VoxAR adjusts the colors of the input transfer function (TF) based on the real-world placement region. Specifically, we introduce a novel optimization method that adjusts the TF colors such that the resulting volume-rendered pixels are discernible against the background and the TF maintains the perceptual mapping between the colors and data intensity values. Finally, we present an assessment of our approach through objective evaluations and subjective user studies.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Saeed Boorboor"},{"affiliations":"","email":"","is_corresponding":false,"name":"Matthew S. Castellana"},{"affiliations":"","email":"","is_corresponding":false,"name":"Yoonsang Kim"},{"affiliations":"","email":"","is_corresponding":false,"name":"Zhutian Chen"},{"affiliations":"","email":"","is_corresponding":false,"name":"Johanna Beyer"},{"affiliations":"","email":"","is_corresponding":false,"name":"Hanspeter Pfister"},{"affiliations":"","email":"","is_corresponding":false,"name":"Arie E. 
Kaufman"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Saeed Boorboor"],"doi":"10.1109/TVCG.2023.3340770","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Adaptive Visualization, Situated Visualization, Augmented Reality, Volume Rendering"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233340770","time_end":"","time_stamp":"","time_start":"","title":"VoxAR: Adaptive Visualization of Volume Rendered Objects in Optical See-Through Augmented Reality","uid":"v-tvcg-20233340770","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Label quality issues, such as noisy labels and imbalanced class distributions, have negative effects on model performance. Automatic reweighting methods identify problematic samples with label quality issues by recognizing their negative effects on validation samples and assigning lower weights to them. However, these methods fail to achieve satisfactory performance when the validation samples are of low quality. To tackle this, we develop Reweighter, a visual analysis tool for sample reweighting. The reweighting relationships between validation samples and training samples are modeled as a bipartite graph. Based on this graph, a validation sample improvement method is developed to improve the quality of validation samples. Since the automatic improvement may not always be perfect, a co-cluster-based bipartite graph visualization is developed to illustrate the reweighting relationships and support the interactive adjustments to validation samples and reweighting results. The adjustments are converted into the constraints of the validation sample improvement method to further improve validation samples. We demonstrate the effectiveness of Reweighter in improving reweighting results through quantitative evaluation and two case studies.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Weikai Yang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Yukai Guo"},{"affiliations":"","email":"","is_corresponding":false,"name":"Jing Wu"},{"affiliations":"","email":"","is_corresponding":false,"name":"Zheng Wang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Lan-Zhe Guo"},{"affiliations":"","email":"","is_corresponding":false,"name":"Yu-Feng Li"},{"affiliations":"","email":"","is_corresponding":false,"name":"Shixia Liu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Weikai Yang"],"doi":"10.1109/TVCG.2023.3345340","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233345340","time_end":"","time_stamp":"","time_start":"","title":"Interactive Reweighting for Mitigating Label Quality Issues","uid":"v-tvcg-20233345340","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"We examined user preferences to combine multiple interaction modalities for collaborative interaction with data shown on large vertical displays. 
Large vertical displays facilitate visual data exploration and allow the use of diverse interaction modalities by multiple users at different distances from the screen. Yet, how to offer multiple interaction modalities is a non-trivial problem. We conducted an elicitation study with 20 participants that generated 1015 interaction proposals combining touch, speech, pen, and mid-air gestures. Given the opportunity to interact using these four modalities, participants preferred speech interaction in 10 of 15 low-level tasks and direct manipulation for straightforward tasks such as showing a tooltip or selecting. In contrast to previous work, participants most favored unimodal and personal interactions. We identified what we call collaborative synonyms among their interaction proposals and found that pairs of users collaborated either unimodally and simultaneously or multimodally and sequentially. We provide insights into how end-users associate visual exploration tasks with certain modalities and how they collaborate at different interaction distances using specific interaction modalities. The supplemental material is available at https://osf.io/m8zuh.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Gabriela Molina Le\u00f3n"},{"affiliations":"","email":"","is_corresponding":false,"name":"Petra Isenberg"},{"affiliations":"","email":"","is_corresponding":false,"name":"Andreas Breiter"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Gabriela Molina Le\u00f3n"],"doi":"10.1109/TVCG.2023.3323150","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Multimodal interaction, collaborative work, large vertical displays, elicitation study, spatio-temporal data"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233323150","time_end":"","time_stamp":"","time_start":"","title":"Eliciting Multimodal and Collaborative Interactions for Data Exploration on Large Vertical Displays","uid":"v-tvcg-20233323150","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Data integration is often performed to consolidate information from multiple disparate data sources during visual data analysis. However, integration operations are usually separate from visual analytics operations such as encode and filter in both interface design and empirical research. We conducted a preliminary user study to investigate whether and how data integration should be incorporated directly into the visual analytics process. We used two interface alternatives featuring contrasting approaches to the data preparation and analysis workflow: manual file-based ex-situ integration as a separate step from visual analytics operations; and automatic UI-based in-situ integration merged with visual analytics operations. Participants were asked to complete specific and free-form tasks with each interface, browsing for patterns, generating insights, and summarizing relationships between attributes distributed across multiple files. Analyzing participants' interactions and feedback, we found both task completion time and total interactions to be similar across interfaces and tasks, as well as unique integration strategies between interfaces and emergent behaviors related to satisficing and cognitive bias. 
Participants' time spent and interactions revealed that in-situ integration enabled users to spend more time on analysis tasks compared with ex-situ integration. Participants' integration strategies and analytical behaviors revealed differences in interface usage for generating and tracking hypotheses and insights. With these results, we synthesized preliminary guidelines for designing future visual analytics interfaces that can support integrating attributes throughout an active analysis process.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Adam Coscia"},{"affiliations":"","email":"","is_corresponding":false,"name":"Ashley Suh"},{"affiliations":"","email":"","is_corresponding":false,"name":"Remco Chang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Alex Endert"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Adam Coscia"],"doi":"10.1109/TVCG.2023.3334513","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Visual analytics, Data integration, User interface design, Integration strategies, Analytical behaviors."],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233334513","time_end":"","time_stamp":"","time_start":"","title":"Preliminary Guidelines For Combining Data Integration and Visual Data Analysis","uid":"v-tvcg-20233334513","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"We report on challenges and considerations for supporting design processes for visualizations in motion embedded in sports videos. We derive our insights from analyzing swimming race visualizations and motion-related data, building a technology probe, as well as a study with designers. Understanding how to design situated visualizations in motion is important for a variety of contexts. Competitive sports coverage, in particular, increasingly includes information on athlete or team statistics and records. Although moving visual representations attached to athletes or other targets are starting to appear, systematic investigations on how to best support their design process in the context of sports videos are still missing. Our work makes several contributions in identifying opportunities for visualizations to be added to swimming competition coverage but, most importantly, in identifying requirements and challenges for designing situated visualizations in motion. 
Our investigations include the analysis of a survey with swimming enthusiasts on their motion-related information needs, an ideation workshop to collect designs and elicit design challenges, the design of a technology probe that allows creating embedded visualizations in motion based on real data, and an evaluation with visualization designers that aimed to understand the benefits of designing directly on videos.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Lijie Yao"},{"affiliations":"","email":"","is_corresponding":false,"name":"Romain Vuillemot"},{"affiliations":"","email":"","is_corresponding":false,"name":"Anastasia Bezerianos"},{"affiliations":"","email":"","is_corresponding":false,"name":"Petra Isenberg"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Lijie Yao"],"doi":"10.1109/TVCG.2023.3341990","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Data visualization, Sports, Videos, Probes, Surveys, Authoring systems, Games, Design framework, Embedded visualization, Sports analytics, Visualization in motion"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233341990","time_end":"","time_stamp":"","time_start":"","title":"Designing for Visualization in Motion: Embedding Visualizations in Swimming Videos","uid":"v-tvcg-20233341990","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Recent growth in the popularity of large language models has led to their increased usage for summarizing, predicting, and generating text, making it vital to help researchers and engineers understand how and why they work. We present KnowledgeVIS, a human-in-the-loop visual analytics system for interpreting language models using fill-in-the-blank sentences as prompts. By comparing predictions between sentences, KnowledgeVIS reveals learned associations that intuitively connect what language models learn during training to natural language tasks downstream, helping users create and test multiple prompt variations, analyze predicted words using a novel semantic clustering technique, and discover insights using interactive visualizations. Collectively, these visualizations help users identify the likelihood and uniqueness of individual predictions, compare sets of predictions between prompts, and summarize patterns and relationships between predictions across all prompts. 
We demonstrate the capabilities of KnowledgeVIS with feedback from six NLP experts as well as three different use cases: (1) probing biomedical knowledge in two domain-adapted models; and (2) evaluating harmful identity stereotypes and (3) discovering facts and relationships between three general-purpose models.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Adam Coscia"},{"affiliations":"","email":"","is_corresponding":false,"name":"Alex Endert"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Adam Coscia"],"doi":"10.1109/TVCG.2023.3346713","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Visual analytics, language models, prompting, interpretability, machine learning."],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233346713","time_end":"","time_stamp":"","time_start":"","title":"KnowledgeVIS: Interpreting Language Models by Comparing Fill-in-the-Blank Prompts","uid":"v-tvcg-20233346713","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Ensembles of contours arise in various applications like simulation, computer-aided design, and semantic segmentation. Uncovering ensemble patterns and analyzing individual members is a challenging task that suffers from clutter. Ensemble statistical summarization can alleviate this issue by permitting analysis of ensembles' distributional components like the mean and median, confidence intervals, and outliers. Contour boxplots, powered by Contour Band Depth (CBD), are a popular non-parametric ensemble summarization method that benefits from CBD's generality, robustness, and theoretical properties. In this work, we introduce Inclusion Depth (ID), a new notion of contour depth with three defining characteristics. First, ID is a generalization of functional Half-Region Depth, which offers several theoretical guarantees. Second, ID relies on a simple principle: the inside/outside relationships between contours. This facilitates implementing ID and understanding its results. Third, the computational complexity of ID scales quadratically in the number of members of the ensemble, improving CBD's cubic complexity. In practice, this also speeds up the computation, enabling the use of ID for exploring large contour ensembles or in contexts requiring multiple depth evaluations like clustering. In a series of experiments on synthetic data and case studies with meteorological and segmentation data, we evaluate ID's performance and demonstrate its capabilities for the visual analysis of contour ensembles.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":false,"name":"Nicolas F. 
Chaves-de-Plaza"},{"affiliations":"","email":"","is_corresponding":false,"name":"Prerak Mody"},{"affiliations":"","email":"","is_corresponding":false,"name":"Marius Staring"},{"affiliations":"","email":"","is_corresponding":false,"name":"Ren\u00e9 van Egmond"},{"affiliations":"","email":"","is_corresponding":false,"name":"Anna Vilanova"},{"affiliations":"","email":"","is_corresponding":false,"name":"Klaus Hildebrandt"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Nicol\u00e1s Ch\u00e1ves"],"doi":"10.1109/TVCG.2024.3350076","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Uncertainty visualization, contours, ensemble summarization, depth statistics."],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243350076","time_end":"","time_stamp":"","time_start":"","title":"Inclusion Depth for Contour Ensembles","uid":"v-tvcg-20243350076","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Interactive visualization can support fluid exploration but is often limited to predetermined tasks. Scripting can support a vast range of queries but may be more cumbersome for free-form exploration. Embedding interactive visualization in scripting environments, such as computational notebooks, provides an opportunity to leverage the strengths of both direct manipulation and scripting. We investigate interactive visualization design methodology, choices, and strategies under this paradigm through a design study of calling context trees used in performance analysis, a field which exemplifies typical exploratory data analysis workflows with big data and hard to define problems. We first produce a formal task analysis assigning tasks to graphical or scripting contexts based on their specificity, frequency, and suitability. We then design a notebook-embedded interactive visualization and validate it with intended users. In a follow-up study, we present participants with multiple graphical and scripting interaction modes to elicit feedback about notebook-embedded visualization design, finding consensus in support of the interaction model. We report and reflect on observations regarding the process and design implications for combining visualization and scripting in notebooks.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Connor Scully-Allison"},{"affiliations":"","email":"","is_corresponding":false,"name":"Ian Lumsden"},{"affiliations":"","email":"","is_corresponding":false,"name":"Katy Williams"},{"affiliations":"","email":"","is_corresponding":false,"name":"Jesse Bartels"},{"affiliations":"","email":"","is_corresponding":false,"name":"Michela Taufer"},{"affiliations":"","email":"","is_corresponding":false,"name":"Stephanie Brink"},{"affiliations":"","email":"","is_corresponding":false,"name":"Abhinav Bhatele"},{"affiliations":"","email":"","is_corresponding":false,"name":"Olga Pearce"},{"affiliations":"","email":"","is_corresponding":false,"name":"Katherine E. 
Isaacs"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Connor Scully-Allison"],"doi":"10.1109/TVCG.2024.3354561","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Exploratory Data Analysis, Interactive Data Analysis, Computational Notebooks, Hybrid Visualization-Scripting, Visualization Design"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243354561","time_end":"","time_stamp":"","time_start":"","title":"Design Concerns for Integrated Scripting and Interactive Visualization in Notebook Environments","uid":"v-tvcg-20243354561","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"News articles containing data visualizations play an important role in informing the public on issues ranging from public health to politics. Recent research on the persuasive appeal of data visualizations suggests that prior attitudes can be notoriously difficult to change. Inspired by an NYT article, we designed two experiments to evaluate the impact of elicitation and contrasting narratives on attitude change, recall, and engagement. We hypothesized that eliciting prior beliefs leads to more elaborative thinking that ultimately results in higher attitude change, better recall, and engagement. Our findings revealed that visual elicitation leads to higher engagement in terms of feelings of surprise. While there is an overall attitude change across all experiment conditions, we did not observe a significant effect of belief elicitation on attitude change. With regard to recall error, while participants in the draw trend elicitation exhibited significantly lower recall error than participants in the categorize trend condition, we found no significant difference in recall error when comparing elicitation conditions to no elicitation. In a follow-up study, we added contrasting narratives with the purpose of making the main visualization (communicating data on the focal issue) appear strikingly different. Compared to the results of Study 1, we found that contrasting narratives improved engagement in terms of surprise and interest but interestingly resulted in higher recall error and no significant change in attitude. 
We discuss the effects of elicitation and contrasting narratives in the context of topic involvement and the strengths of temporal trends encoded in the data visualization.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Milad Rogha"},{"affiliations":"","email":"","is_corresponding":false,"name":"Subham Sah"},{"affiliations":"","email":"","is_corresponding":false,"name":"Alireza Karduni"},{"affiliations":"","email":"","is_corresponding":false,"name":"Douglas Markant"},{"affiliations":"","email":"","is_corresponding":false,"name":"Wenwen Dou"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Milad Rogha"],"doi":"10.1109/TVCG.2024.3355884","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Data Visualization, Market Research, Visualization, Uncertainty, Data Models, Correlation, Attitude Control, Belief Elicitation, Visual Elicitation, Data Visualization, Contrasting Narratives"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243355884","time_end":"","time_stamp":"","time_start":"","title":"The Impact of Elicitation and Contrasting Narratives on Engagement, Recall and Attitude Change with News Articles Containing Data Visualization","uid":"v-tvcg-20243355884","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Partitioning a dynamic network into subsets (i.e., snapshots) based on disjoint time intervals is a widely used technique for understanding how structural patterns of the network evolve. However, selecting an appropriate time window (i.e., slicing a dynamic network into snapshots) is challenging and time-consuming, often involving a trial-and-error approach to investigating underlying structural patterns. To address this challenge, we present MoNetExplorer, a novel interactive visual analytics system that leverages temporal network motifs to provide recommendations for window sizes and support users in visually comparing different slicing results. MoNetExplorer provides a comprehensive analysis based on window size, including (1) a temporal overview to identify the structural information, (2) temporal network motif composition, and (3) node-link-diagram-based details to enable users to identify and understand structural patterns at various temporal resolutions. To demonstrate the effectiveness of our system, we conducted a case study with network researchers using two real-world dynamic network datasets. 
Our case studies show that the system effectively supports users in gaining valuable insights into the temporal and structural aspects of dynamic networks.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Seokweon Jung"},{"affiliations":"","email":"","is_corresponding":false,"name":"DongHwa Shin"},{"affiliations":"","email":"","is_corresponding":false,"name":"Hyeon Jeon"},{"affiliations":"","email":"","is_corresponding":false,"name":"Kiroong Choe"},{"affiliations":"","email":"","is_corresponding":false,"name":"Jinwook Seo"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Seokweon Jung"],"doi":"10.1109/TVCG.2023.3337396","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Visual analytics, Measurement, Size measurement, Windows, Time measurement, Data visualization, Task analysis, Visual analytics, Dynamic networks, Temporal network motifs, Interactive network slicing"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233337396","time_end":"","time_stamp":"","time_start":"","title":"A Visual Analytics System for Analyzing Dynamic Networks with Temporal Network Motifs","uid":"v-tvcg-20233337396","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"We conduct two in-lab experiments (N=93) to evaluate the effectiveness of Gantt charts, extended Gantt charts, and stringline charts for visualizing fixed-order event sequence data. We first formulate five types of event sequences and define three types of sequence elements: point events, interval events, and the temporal gaps between them. Our two experiments focus on event sequences with a pre-defined, fixed order, and measure task error rates and completion time. The first experiment shows single sequences and assesses the three charts' performance in comparing event duration or gap. The second experiment shows multiple sequences and evaluates how well the charts reveal temporal patterns. The results suggest that when visualizing single fixed-order event sequences, 1) Gantt and extended Gantt charts lead to comparable error rates in the duration-comparing task; 2) Gantt charts exhibit completion times shorter than or equal to those of extended Gantt charts; 3) both Gantt and extended Gantt charts demonstrate shorter completion times than stringline charts; 4) however, stringline charts outperform the other two charts with fewer errors in the comparing task when event type counts are high. Additionally, when visualizing multiple point-based fixed-order event sequences, stringline charts require less time than Gantt charts for people to find temporal patterns. 
Based on these findings, we discuss design opportunities for visualizing fixed-order event sequences and discuss future avenues for optimizing these charts.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Junxiu Tang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Fumeng Yang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Jiang Wu"},{"affiliations":"","email":"","is_corresponding":false,"name":"Yifang Wang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Jiayi Zhou"},{"affiliations":"","email":"","is_corresponding":false,"name":"Xiwen Cai"},{"affiliations":"","email":"","is_corresponding":false,"name":"Lingyun Yu"},{"affiliations":"","email":"","is_corresponding":false,"name":"Yingcai Wu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Junxiu Tang"],"doi":"10.1109/TVCG.2024.3358919","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Gantt chart, stringline chart, Marey's graph, event sequence, empirical study"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243358919","time_end":"","time_stamp":"","time_start":"","title":"A Comparative Study on Fixed-order Event Sequence Visualizations: Gantt, Extended Gantt, and Stringline Charts","uid":"v-tvcg-20243358919","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Seasonal-trend decomposition based on loess (STL) is a powerful tool to explore time series data visually. In this paper, we present an extension of STL to uncertain data, named uncertainty-aware STL (UASTL). Our method propagates multivariate Gaussian distributions mathematically exactly through the entire analysis and visualization pipeline. Thereby, stochastic quantities shared between the components of the decomposition are preserved. Moreover, we present application scenarios with uncertainty modeling based on Gaussian processes, e.g., data with uncertain areas or missing values. Besides these mathematical results and modeling aspects, we introduce visualization techniques that address the challenges of uncertainty visualization and the problem of visualizing highly correlated components of a decomposition. The global uncertainty propagation enables the time series visualization with STL-consistent samples, the exploration of correlation between and within decomposition's components, and the analysis of the impact of varying uncertainty. Finally, we show the usefulness of UASTL and the importance of uncertainty visualization with several examples. 
Thereby, a comparison with conventional STL is performed.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Tim Krake"},{"affiliations":"","email":"","is_corresponding":false,"name":"Daniel Kl\u00f6tzl"},{"affiliations":"","email":"","is_corresponding":false,"name":"David H\u00e4gele"},{"affiliations":"","email":"","is_corresponding":false,"name":"Daniel Weiskopf"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Tim Krake"],"doi":"10.1109/TVCG.2024.3364388","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["- I.6.9.g Visualization techniques and methodologies < I.6.9 Visualization < I.6 Simulation, Modeling, and Visualization < I Compu - G.3 Probability and Statistics < G Mathematics of Computing - G.3.n Statistical computing < G.3 Probability and Statistics < G Mathematics of Computing - G.3.p Stochastic processes < G.3 Probability and Statistics < G Mathematics of Computing"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243364388","time_end":"","time_stamp":"","time_start":"","title":"Uncertainty-Aware Seasonal-Trend Decomposition Based on Loess","uid":"v-tvcg-20243364388","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"The need to understand the structure of hierarchical or high-dimensional data is present in a variety of fields. Hyperbolic spaces have proven to be an important tool for embedding computations and analysis tasks as their non-linear nature lends itself well to tree or graph data. Subsequently, they have also been used in the visualization of high-dimensional data, where they exhibit increased embedding performance. However, none of the existing dimensionality reduction methods for embedding into hyperbolic spaces scale well with the size of the input data. That is because the embeddings are computed via iterative optimization schemes and the computation cost of every iteration is quadratic in the size of the input. Furthermore, due to the non-linear nature of hyperbolic spaces, Euclidean acceleration structures cannot directly be translated to the hyperbolic setting. This paper introduces the first acceleration structure for hyperbolic embeddings, building upon a polar quadtree. We compare our approach with existing methods and demonstrate that it computes embeddings of similar quality in significantly less time. Implementation and scripts for the experiments can be found at this https URL.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Martin Skrodzki"},{"affiliations":"","email":"","is_corresponding":false,"name":"Hunter van Geffen"},{"affiliations":"","email":"","is_corresponding":false,"name":"Nicolas F. 
Chaves-de-Plaza"},{"affiliations":"","email":"","is_corresponding":false,"name":"Thomas H\u00f6llt"},{"affiliations":"","email":"","is_corresponding":false,"name":"Elmar Eisemann"},{"affiliations":"","email":"","is_corresponding":false,"name":"Klaus Hildebrandt"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Martin Skrodzki"],"doi":"10.1109/TVCG.2024.3364841","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Human-Computer Interaction (cs.HC); Artificial Intelligence (cs.AI); Machine Learning (cs.LG); Quantitative Methods (q-bio.QM); Machine Learning (stat.ML), Dimensionality reduction, t-SNE, hyperbolic embedding, acceleration structure"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243364841","time_end":"","time_stamp":"","time_start":"","title":"Accelerating hyperbolic t-SNE","uid":"v-tvcg-20243364841","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Implicit neural representations (INRs) are widely used for scientific data reduction and visualization by modeling the function that maps a spatial location to a data value. Without any prior knowledge about the spatial distribution of values, we are forced to sample densely from INRs to perform visualization tasks like iso-surface extraction, which can be very computationally expensive. Recently, range analysis has shown promising results in improving the efficiency of geometric queries, such as ray casting and hierarchical mesh extraction, on INRs for 3D geometries by using arithmetic rules to bound the output range of the network within a spatial region. However, the analysis bounds are often too conservative for complex scientific data. In this paper, we present an improved technique for range analysis by revisiting the arithmetic rules and analyzing the probability distribution of the network output within a spatial region. We model this distribution efficiently as a Gaussian distribution by applying the central limit theorem. Excluding low probability values, we are able to tighten the output bounds, resulting in a more accurate estimation of the value range, and hence more accurate identification of iso-surface cells and more efficient iso-surface extraction on INRs. 
Our approach demonstrates superior performance in terms of the iso-surface extraction time on four datasets compared to the original range analysis method and can also be generalized to other geometric query tasks.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Haoyu Li"},{"affiliations":"","email":"","is_corresponding":false,"name":"Han-Wei Shen"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Haoyu Li"],"doi":"10.1109/TVCG.2024.3365089","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Iso-surface extraction, implicit neural representation, uncertainty propagation, affine arithmetic."],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243365089","time_end":"","time_stamp":"","time_start":"","title":"Improving Efficiency of Iso-Surface Extraction on Implicit Neural Representations Using Uncertainty Propagation","uid":"v-tvcg-20243365089","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Currently, growing data sources and long-running algorithms impede user attention and interaction with visual analytics applications. Progressive visualization (PV) and visual analytics (PVA) alleviate this problem by allowing immediate feedback and interaction with large datasets and complex computations, avoiding waiting for complete results by using partial results improving with time. Yet, creating a progressive visualization requires more effort than a regular visualization but also opens up new possibilities, such as steering the computations towards more relevant parts of the data, thus saving computational resources. However, there is currently no comprehensive overview of the design space for progressive visualization systems. We surveyed the related work of PV and derived a new taxonomy for progressive visualizations by systematically categorizing all PV publications that included visualizations with progressive features. Progressive visualizations can be categorized by well-known visualization taxonomies, but we also found that progressive visualizations can be distinguished by the way they manage their data processing, data domain, and visual update. Furthermore, we identified key properties such as uncertainty, steering, visual stability, and real-time processing that are significantly different with progressive applications. We also collected evaluation methodologies reported by the publications and conclude with statistical findings, research gaps, and open challenges. 
A continuously updated visual browser of the survey data is available at visualsurvey.net/pva.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Alex Ulmer"},{"affiliations":"","email":"","is_corresponding":false,"name":"Marco Angelini"},{"affiliations":"","email":"","is_corresponding":false,"name":"Jean-Daniel Fekete"},{"affiliations":"","email":"","is_corresponding":false,"name":"J\u00f6rn Kohlhammer"},{"affiliations":"","email":"","is_corresponding":false,"name":"Thorsten May"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Alex Ulmer"],"doi":"10.1109/TVCG.2023.3346641","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Data visualization, Convergence, Visual analytics, Taxonomy, Surveys, Rendering (computer graphics), Task analysis, Progressive Visual Analytics, Progressive Visualization, Taxonomy, State-of-the-Art Report, Survey"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233346641","time_end":"","time_stamp":"","time_start":"","title":"A Survey on Progressive Visualization","uid":"v-tvcg-20233346641","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Data visualization and journalism are deeply connected. From early infographics to recent data-driven storytelling, visualization has become an integrated part of contemporary journalism, primarily as a communication artifact to inform the general public. Data journalism, harnessing the power of data visualization, has emerged as a bridge between the growing volume of data and our society. Visualization research that centers around data storytelling has sought to understand and facilitate such journalistic endeavors. However, a recent metamorphosis in journalism has brought broader challenges and opportunities that extend beyond mere communication of data. We present this article to enhance our understanding of such transformations and thus broaden visualization research's scope and practical contribution to this evolving field. We first survey recent significant shifts, emerging challenges, and computational practices in journalism. We then summarize six roles of computing in journalism and their implications. Based on these implications, we provide propositions for visualization research concerning each role. 
Ultimately, by mapping the roles and propositions onto a proposed ecological model and contextualizing existing visualization research, we surface seven general topics and a series of research agendas that can guide future visualization research at this intersection.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Yu Fu"},{"affiliations":"","email":"","is_corresponding":false,"name":"John Stasko"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yu Fu"],"doi":"10.1109/TVCG.2023.3287585","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Computational journalism, data visualization, data-driven storytelling, journalism"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233287585","time_end":"","time_stamp":"","time_start":"","title":"More Than Data Stories: Broadening the Role of Visualization in Contemporary Journalism","uid":"v-tvcg-20233287585","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"The increasing ubiquity of data in everyday life has elevated the importance of data literacy and accessible data representations, particularly for individuals with disabilities. While prior research predominantly focuses on the needs of the visually impaired, our survey aims to broaden this scope by investigating accessible data representations across a more inclusive spectrum of disabilities. After conducting a systematic review of 152 accessible data representation papers from ACM and IEEE databases, we found that roughly 78% of existing articles center on vision impairments. In this paper, we conduct a comprehensive review of the remaining 22% of papers focused on underrepresented disability communities. We developed categorical dimensions based on accessibility, visualization, and human-computer interaction to classify the papers. These dimensions include the community of focus, issues addressed, contribution type, study methods, participants, data type, visualization type, and data domain. Our work redefines accessible data representations by illustrating their application for disabilities beyond those related to vision. Building on our literature review, we identify and discuss opportunities for future research in accessible data representations. All supplemental materials are available at https://osf.io/yv4xm/?view_only=7b36a3fbf7a14b3888029966faa3def9.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":false,"name":"Brianna L. Wimer"},{"affiliations":"","email":"","is_corresponding":false,"name":"Laura South"},{"affiliations":"","email":"","is_corresponding":false,"name":"Keke Wu"},{"affiliations":"","email":"","is_corresponding":false,"name":"Danielle Albers Szafir"},{"affiliations":"","email":"","is_corresponding":false,"name":"Michelle A. Borkin"},{"affiliations":"","email":"","is_corresponding":false,"name":"Ronald A. 
Metoyer"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Brianna Wimer"],"doi":"10.1109/TVCG.2024.3356566","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Accessibility, Data Representations."],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243356566","time_end":"","time_stamp":"","time_start":"","title":"Beyond Vision Impairments: Redefining the Scope of Accessible Data Representations","uid":"v-tvcg-20243356566","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"A multitude of studies have been conducted on graph drawing, but many existing methods only focus on optimizing a single aesthetic aspect of graph layouts. There are a few existing methods that attempt to develop a flexible solution for optimizing different aesthetic aspects measured by different aesthetic criteria. Furthermore, thanks to the significant advance in deep learning techniques, several deep learning-based layout methods were proposed recently, which have demonstrated the advantages of the deep learning approaches for graph drawing. However, none of these existing methods can be directly applied to optimizing non-differentiable criteria without special accommodation. In this work, we propose a novel Generative Adversarial Network (GAN) based deep learning framework for graph drawing, called SmartGD, which can optimize any quantitative aesthetic goals even though they are non-differentiable. In the cases where the aesthetic goal is too abstract to be described mathematically, SmartGD can draw graphs in a similar style as a collection of good layout examples, which might be selected by humans based on the abstract aesthetic goal. To demonstrate the effectiveness and efficiency of SmartGD, we conduct experiments on minimizing stress, minimizing edge crossing, maximizing crossing angle, and a combination of multiple aesthetics. Compared with several popular graph drawing algorithms, the experimental results show that SmartGD achieves good performance both quantitatively and qualitatively.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Xiaoqi Wang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Kevin Yen"},{"affiliations":"","email":"","is_corresponding":false,"name":"Yifan Hu"},{"affiliations":"","email":"","is_corresponding":false,"name":"Han-Wei Shen"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Xiaoqi Wang"],"doi":"10.1109/TVCG.2023.3306356","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233306356","time_end":"","time_stamp":"","time_start":"","title":"SmartGD: A GAN-Based Graph Drawing Framework for Diverse Aesthetic Goals","uid":"v-tvcg-20233306356","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Is it true that if citizens understand hurricane probabilities, they will make more rational decisions for evacuation? 
Finding answers to such questions is not straightforward in the literature because the terms \u201cjudgment\u201d and \u201cdecision making\u201d are often used interchangeably. This terminology conflation leads to a lack of clarity on whether people make suboptimal decisions because of inaccurate judgments of information conveyed in visualizations or because they use alternative yet currently unknown heuristics. To decouple judgment from decision making, we review relevant concepts from the literature and present two preregistered experiments (N=601) to investigate if the task (judgment vs. decision making), the scenario (sports vs. humanitarian), and the visualization (quantile dotplots, density plots, probability bars) affect accuracy. While experiment 1 was inconclusive, we found evidence for a difference in experiment 2. Contrary to our expectations and previous research, which found decisions less accurate than their direct-equivalent judgments, our results pointed in the opposite direction. Our findings further revealed that decisions were less vulnerable to status-quo bias, suggesting decision makers may disfavor responses associated with inaction. We also found that both scenario and visualization types can influence people's judgments and decisions. Although effect sizes are not large and results should be interpreted carefully, we conclude that judgments cannot be safely used as proxy tasks for decision making, and discuss implications for visualization research and beyond. Materials and preregistrations are available at https://osf.io/ufzp5/?view_only=adc0f78a23804c31bf7fdd9385cb264f.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Ba\u015fak Oral"},{"affiliations":"","email":"","is_corresponding":false,"name":"Pierre Dragicevic"},{"affiliations":"","email":"","is_corresponding":false,"name":"Alexandru Telea"},{"affiliations":"","email":"","is_corresponding":false,"name":"Evanthia Dimara"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Ba\u015fak Oral"],"doi":"10.1109/TVCG.2023.3346640","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Data visualization, Task analysis, Decision making, Visualization, Bars, Sports, Terminology, Cognition, Decision Making, Judgment, Psychology, Visualization"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233346640","time_end":"","time_stamp":"","time_start":"","title":"Decoupling Judgment and Decision Making: A Tale of Two Tails","uid":"v-tvcg-20233346640","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Small multiples are a popular visualization method, displaying different views of a dataset using multiple frames, often with the same scale and axes. However, there is a need to address their potential constraints, especially in the context of human cognitive capacity limits. These limits dictate the maximum information our mind can process at once. We explore the issue of capacity limitation by testing competing theories that describe how the number of frames shown in a display, the scale of the frames, and time constraints impact user performance with small multiples of line charts in an energy grid scenario. 
In two online studies (Experiment 1 n = 141 and Experiment 2 n = 360) and a follow-up eye-tracking analysis (n=5), we found a linear decline in accuracy with increasing frames across seven tasks, which was not fully explained by differences in frame size, suggesting visual search challenges. Moreover, the studies demonstrate that highlighting specific frames can mitigate some visual search difficulties but, surprisingly, not eliminate them. This research offers insights into optimizing the utility of small multiples by aligning them with human limitations.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Helia Hosseinpour"},{"affiliations":"","email":"","is_corresponding":false,"name":"Laura E. Matzen"},{"affiliations":"","email":"","is_corresponding":false,"name":"Kristin M. Divis"},{"affiliations":"","email":"","is_corresponding":false,"name":"Spencer C. Castro"},{"affiliations":"","email":"","is_corresponding":false,"name":"Lace Padilla"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Helia Hosseinpour"],"doi":"10.1109/TVCG.2024.3372620","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Cognition, small multiples, time-series data"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243372620","time_end":"","time_stamp":"","time_start":"","title":"Examining Limits of Small Multiples: Frame Quantity Impacts Judgments with Line Graphs","uid":"v-tvcg-20243372620","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Scatterplots provide a visual representation of bivariate data (or 2D embeddings of multivariate data) that allows for effective analyses of data dependencies, clusters, trends, and outliers. Unfortunately, classical scatterplots suffer from scalability issues, since growing data sizes eventually lead to overplotting and visual clutter on a screen with a fixed resolution, which hinders the data analysis process. We propose an algorithm that compensates for irregular sample distributions by a smooth transformation of the scatterplot's visual domain. Our algorithm evaluates the scatterplot's density distribution to compute a regularization mapping based on integral images of the rasterized density function. The mapping preserves the samples' neighborhood relations. Few regularization iterations suffice to achieve a nearly uniform sample distribution that efficiently uses the available screen space. We further propose approaches to visually convey the transformation that was applied to the scatterplot and compare them in a user study.
We present a novel parallel algorithm for fast GPU-based integral-image computation, which allows for integrating our de-cluttering approach into interactive visual data analysis systems.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Hennes Rave"},{"affiliations":"","email":"","is_corresponding":false,"name":"Vladimir Molchanov"},{"affiliations":"","email":"","is_corresponding":false,"name":"Lars Linsen"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Hennes Rave"],"doi":"10.1109/TVCG.2024.3381453","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243381453","time_end":"","time_stamp":"","time_start":"","title":"De-cluttering Scatterplots with Integral Images","uid":"v-tvcg-20243381453","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Advanced manufacturing creates increasingly complex objects with material compositions that are often difficult to characterize by a single modality. Our domain scientists are going beyond traditional methods by employing both X-ray and neutron computed tomography to obtain complementary representations expected to better resolve material boundaries. However, the use of two modalities creates its own challenges for visualization, requiring either complex adjustments of multimodal transfer functions or the need for multiple views. Together with experts in nondestructive evaluation, we designed a novel interactive multimodal visualization approach to create a combined view of the co-registered X-ray and neutron acquisitions of industrial objects. Using an automatic topological segmentation of the bivariate histogram of X-ray and neutron values as a starting point, the system provides a simple yet effective interface to easily create, explore, and adjust a multimodal visualization. We propose a widget with simple brushing interactions that enables the user to quickly correct the segmented histogram results. Our semiautomated system enables domain experts to intuitively explore large multimodal datasets without the need for either advanced segmentation algorithms or knowledge of visualization techniques.
We demonstrate our approach using synthetic examples, industrial phantom objects created to stress multimodal scanning techniques, and real-world objects, and we discuss expert feedback.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":false,"name":"Xuan Huang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Haichao Miao"},{"affiliations":"","email":"","is_corresponding":false,"name":"Hyojin Kim"},{"affiliations":"","email":"","is_corresponding":false,"name":"Andrew Townsend"},{"affiliations":"","email":"","is_corresponding":false,"name":"Kyle Champley"},{"affiliations":"","email":"","is_corresponding":false,"name":"Joseph Tringe"},{"affiliations":"","email":"","is_corresponding":false,"name":"Valerio Pascucci"},{"affiliations":"","email":"","is_corresponding":false,"name":"Peer-Timo Bremer"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Xuan Huang"],"doi":"10.1109/TVCG.2024.3382607","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243382607","time_end":"","time_stamp":"","time_start":"","title":"Bimodal Visualization of Industrial X-ray and Neutron Computed Tomography Data","uid":"v-tvcg-20243382607","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Visualization Recommendation Systems (VRSs) are a novel and challenging field of study aiming to help generate insightful visualizations from data and support non-expert users in information discovery. Among the many contributions proposed in this area, some systems embrace the ambitious objective of imitating human analysts to identify relevant relationships in data and make appropriate design choices to represent these relationships with insightful charts. We denote these systems as \"agnostic\" VRSs since they do not rely on human-provided constraints and rules but try to learn the task autonomously. Despite the high application potential of agnostic VRSs, their progress is hindered by several obstacles, including the absence of standardized datasets to train recommendation algorithms and the difficulty of learning design rules and defining quantitative criteria for evaluating the perceptual effectiveness of generated plots.
This paper summarizes the literature on agnostic VRSs and outlines promising future research directions.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Luca Podo"},{"affiliations":"","email":"","is_corresponding":false,"name":"Bardh Prenkaj"},{"affiliations":"","email":"","is_corresponding":false,"name":"Paola Velardi"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Luca Podo"],"doi":"10.1109/TVCG.2024.3374571","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243374571","time_end":"","time_stamp":"","time_start":"","title":"Agnostic Visual Recommendation Systems: Open Challenges and Future Directions","uid":"v-tvcg-20243374571","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Time-stamped event sequences (TSEQs) are time-oriented data without value information, shifting the focus of users to the exploration of temporal event occurrences. TSEQs exist in application domains, such as sleeping behavior, earthquake aftershocks, and stock market crashes. Domain experts face four challenges, for which they could use interactive and visual data analysis methods. First, TSEQs can be large with respect to both the number of sequences and events, often leading to millions of events. Second, domain experts need validated metrics and features to identify interesting patterns. Third, after identifying interesting patterns, domain experts contextualize the patterns to foster sensemaking. Finally, domain experts seek to reduce data complexity by data simplification and machine learning support. We present IVESA, a visual analytics approach for TSEQs. It supports the analysis of TSEQs at the granularities of sequences and events, supported with metrics and feature analysis tools. IVESA has multiple linked views that support overview, sort+filter, comparison, details-on-demand, and metadata relation-seeking tasks, as well as data simplification through feature analysis, interactive clustering, filtering, and motif detection and simplification. We evaluated IVESA with three case studies and a user study with six domain experts working with six different datasets and applications. 
Results demonstrate the usability and generalizability of IVESA across applications and cases that had up to 1,000,000 events.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"J\u00fcrgen Bernard"},{"affiliations":"","email":"","is_corresponding":false,"name":"Clara-Maria Barth"},{"affiliations":"","email":"","is_corresponding":false,"name":"Eduard Cuba"},{"affiliations":"","email":"","is_corresponding":false,"name":"Andrea Meier"},{"affiliations":"","email":"","is_corresponding":false,"name":"Yasara Peiris"},{"affiliations":"","email":"","is_corresponding":false,"name":"Ben Shneiderman"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["J\u00fcrgen Bernard"],"doi":"10.1109/TVCG.2024.3382760","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Time-Stamped Event Sequences, Time-Oriented Data, Visual Analytics, Data-First Design Study, Iterative Design, Visual Interfaces, User Evaluation"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243382760","time_end":"","time_stamp":"","time_start":"","title":"Visual Analysis of Time-Stamped Event Sequences","uid":"v-tvcg-20243382760","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Genomics is at the core of precision medicine, and there are high expectations on genomics-enabled improvement of patient outcomes in the years to come. Around the world, initiatives to increase the use of DNA sequencing in clinical routine are being deployed, such as the use of broad panels in the standard care for oncology patients. Such a development comes at the cost of increased demands on throughput in genomic data analysis. In this paper, we use the task of copy number variant (CNV) analysis as a context for exploring visualization concepts for clinical genomics. CNV calls are generated algorithmically, but time-consuming manual intervention is needed to separate relevant findings from irrelevant ones in the resulting large call candidate lists. We present a visualization environment, named Copycat, to support this review task in a clinical scenario. Key components are a scatter-glyph plot replacing the traditional list visualization, and a glyph representation designed for at-a-glance relevance assessments. 
Moreover, we present results from a formative evaluation of the prototype by domain specialists, from which we elicit insights to guide both prototype improvements and visualization for clinical genomics in general.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Emilia St\u00e5hlbom"},{"affiliations":"","email":"","is_corresponding":false,"name":"Jesper Molin"},{"affiliations":"","email":"","is_corresponding":false,"name":"Claes Lundstr\u00f6m"},{"affiliations":"","email":"","is_corresponding":false,"name":"Anders Ynnerman"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Emilia St\u00e5hlbom"],"doi":"10.1109/TVCG.2024.3385118","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Visualization, genomics, copy number variants, clinical decision support, evaluation"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243385118","time_end":"","time_stamp":"","time_start":"","title":"Visualization for diagnostic review of copy number variants in complex DNA sequencing data","uid":"v-tvcg-20243385118","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"This system paper documents the technical foundations for the extension of the Topology ToolKit (TTK) to distributed-memory parallelism with the Message Passing Interface (MPI). While several recent papers introduced topology-based approaches for distributed-memory environments, these reported experiments obtained with tailored, mono-algorithm implementations. In contrast, we describe in this paper a versatile approach (supporting both triangulated domains and regular grids) for the support of topological analysis pipelines, i.e., a sequence of topological algorithms interacting together, possibly on distinct numbers of processes. While developing this extension, we faced several algorithmic and software engineering challenges, which we document in this paper. We describe an MPI extension of TTK\u2019s data structure for triangulation representation and traversal, a central component to the global performance and generality of TTK\u2019s topological implementations. We also introduce an intermediate interface between TTK and MPI, both at the global pipeline level, and at the fine-grain algorithmic level. We provide a taxonomy for the distributed-memory topological algorithms supported by TTK, depending on their communication needs, and provide examples of hybrid MPI+thread parallelizations. Detailed performance analyses show that parallel efficiencies range from 20% to 80% (depending on the algorithms), and that the MPI-specific preconditioning introduced by our framework induces a negligible computation time overhead. We illustrate the new distributed-memory capabilities of TTK with an example of an advanced analysis pipeline, combining multiple algorithms, run on the largest publicly available dataset we have found (120 billion vertices) on a standard cluster with 64 nodes (for a total of 1536 cores). Finally, we provide a roadmap for the completion of TTK\u2019s MPI extension, along with generic recommendations for each algorithm communication category.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":false,"name":"E. Le Guillou"},{"affiliations":"","email":"","is_corresponding":false,"name":"M. Will"},
{"affiliations":"","email":"","is_corresponding":false,"name":"P. Guillou"},{"affiliations":"","email":"","is_corresponding":false,"name":"J. Lukasczyk"},{"affiliations":"","email":"","is_corresponding":false,"name":"P. Fortin"},{"affiliations":"","email":"","is_corresponding":false,"name":"C. Garth"},{"affiliations":"","email":"","is_corresponding":false,"name":"J. Tierny"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Julien Tierny"],"doi":"10.1109/TVCG.2024.3390219","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Topological data analysis, high-performance computing, distributed-memory algorithms"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243390219","time_end":"","time_stamp":"","time_start":"","title":"TTK is Getting MPI-Ready","uid":"v-tvcg-20243390219","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"The use of natural language interfaces (NLIs) to create charts is becoming increasingly popular due to the intuitiveness of natural language interactions. One key challenge in this approach is to accurately capture user intents and transform them into proper chart specifications. This obstructs the wide use of NLIs in chart generation, as users' natural language inputs are generally abstract (i.e., ambiguous or under-specified), without a clear specification of visual encodings. Recently, pre-trained large language models (LLMs) have exhibited superior performance in understanding and generating natural language, demonstrating great potential for downstream tasks. Inspired by this major trend, we propose ChartGPT, generating charts from abstract natural language inputs. However, LLMs struggle to address complex logic problems. To enable the model to accurately specify the complex parameters and perform operations in chart generation, we decompose the generation process into a step-by-step reasoning pipeline, so that the model only needs to reason about a single, specific sub-task during each run. Moreover, LLMs are pre-trained on general datasets, which might be biased for the task of chart generation. To provide adequate visualization knowledge, we create a dataset consisting of abstract utterances and charts and improve model performance through fine-tuning. We further design an interactive interface for ChartGPT that allows users to check and modify the intermediate outputs of each step.
The effectiveness of the proposed system is evaluated through quantitative evaluations and a user study.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Yuan Tian"},{"affiliations":"","email":"","is_corresponding":false,"name":"Weiwei Cui"},{"affiliations":"","email":"","is_corresponding":false,"name":"Dazhen Deng"},{"affiliations":"","email":"","is_corresponding":false,"name":"Xinjing Yi"},{"affiliations":"","email":"","is_corresponding":false,"name":"Yurun Yang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Haidong Zhang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Yingcai Wu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yuan Tian"],"doi":"10.1109/TVCG.2024.3368621","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Natural language interfaces, large language models, data visualization"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243368621","time_end":"","time_stamp":"","time_start":"","title":"ChartGPT: Leveraging LLMs to Generate Charts from Abstract Natural Language","uid":"v-tvcg-20243368621","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"The advances in AI-enabled techniques have accelerated the creation and automation of visualizations in the past decade. However, presenting visualizations in a descriptive and generative format remains a challenge. Moreover, current visualization embedding methods focus on standalone visualizations, neglecting the importance of contextual information for multi-view visualizations. To address this issue, we propose a new representation model, Chart2Vec, to learn a universal embedding of visualizations with context-aware information. Chart2Vec aims to support a wide range of downstream visualization tasks such as recommendation and storytelling. Our model considers both structural and semantic information of visualizations in declarative specifications. To enhance the context-aware capability, Chart2Vec employs multi-task learning on both supervised and unsupervised tasks concerning the cooccurrence of visualizations. We evaluate our method through an ablation study, a user study, and a quantitative comparison. 
The results verified the consistency of our embedding method with human cognition and showed its advantages over existing methods.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Qing Chen"},{"affiliations":"","email":"","is_corresponding":false,"name":"Ying Chen"},{"affiliations":"","email":"","is_corresponding":false,"name":"Ruishi Zou"},{"affiliations":"","email":"","is_corresponding":false,"name":"Wei Shuai"},{"affiliations":"","email":"","is_corresponding":false,"name":"Yi Guo"},{"affiliations":"","email":"","is_corresponding":false,"name":"Jiazhe Wang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Nan Cao"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Qing Chen"],"doi":"10.1109/TVCG.2024.3383089","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Representation Learning, Multi-view Visualization, Visual Storytelling, Visualization Embedding"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243383089","time_end":"","time_stamp":"","time_start":"","title":"Chart2Vec: A Universal Embedding of Context-Aware Visualizations","uid":"v-tvcg-20243383089","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"The issue of traffic congestion poses a significant obstacle to the development of global cities. One promising solution to tackle this problem is intelligent traffic signal control (TSC). Recently, TSC strategies leveraging reinforcement learning (RL) have garnered attention among researchers. However, the evaluation of these models has primarily relied on fixed metrics like reward and queue length. This limited evaluation approach provides only a narrow view of the model\u2019s decision-making process, impeding its practical implementation. Moreover, effective TSC necessitates coordinated actions across multiple intersections. Existing visual analysis solutions fall short when applied in multi-agent settings. In this study, we delve into the challenge of interpretability in multi-agent reinforcement learning (MARL), particularly within the context of TSC. We propose MARLens, a visual analytics system tailored to understand MARL-based TSC. Our system serves as a versatile platform for both RL and TSC researchers. It empowers them to explore the model\u2019s features from various perspectives, revealing its decision-making processes and shedding light on interactions among different agents. To facilitate quick identification of critical states, we have devised multiple visualization views, complemented by a traffic simulation module that allows users to replay specific training scenarios. To validate the utility of our proposed system, we present three comprehensive case studies, incorporate insights from domain experts through interviews, and conduct a user study. 
These collective efforts underscore the feasibility and effectiveness of MARLens in enhancing our understanding of MARL-based TSC systems and pave the way for more informed and efficient traffic management strategies.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":false,"name":"Yutian Zhang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Guohong Zheng"},{"affiliations":"","email":"","is_corresponding":false,"name":"Zhiyuan Liu"},{"affiliations":"","email":"","is_corresponding":false,"name":"Quan Li"},{"affiliations":"","email":"","is_corresponding":true,"name":"Haipeng Zeng"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Haipeng Zeng"],"doi":"10.1109/TVCG.2024.3392587","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Traffic signal control, multi-agent, reinforcement learning, visual analytics"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243392587","time_end":"","time_stamp":"","time_start":"","title":"MARLens: Understanding Multi-agent Reinforcement Learning for Traffic Signal Control via Visual Analytics","uid":"v-tvcg-20243392587","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Areas of interest (AOIs) are well-established means of providing semantic information for visualizing, analyzing, and classifying gaze data. However, the usual manual annotation of AOIs is time-consuming and further impaired by ambiguities in label assignments. To address these issues, we present an interactive labeling approach that combines visualization, machine learning, and user-centered explainable annotation. Our system provides uncertainty-aware visualization to build trust in classification with an increasing number of annotated examples. It combines specifically designed EyeFlower glyphs, dimensionality reduction, and selection and exploration techniques in an integrated workflow. The approach is versatile and hardware-agnostic, supporting video stimuli from stationary and unconstrained mobile eye tracking alike. We conducted an expert review to assess labeling strategies and trust building.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Maurice Koch"},{"affiliations":"","email":"","is_corresponding":false,"name":"Nan Cao"},{"affiliations":"","email":"","is_corresponding":false,"name":"Daniel Weiskopf"},{"affiliations":"","email":"","is_corresponding":false,"name":"Kuno Kurzhals"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Maurice Koch"],"doi":"10.1109/TVCG.2024.3392476","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Visual analytics, eye tracking, uncertainty, active learning, trust building"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243392476","time_end":"","time_stamp":"","time_start":"","title":"Active Gaze Labeling: Visualization for Trust Building","uid":"v-tvcg-20243392476","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Dimensionality reduction (DR) algorithms are diverse and widely used for analyzing high-dimensional data.
Various metrics and tools have been proposed to evaluate and interpret the DR results. However, most metrics and methods fail to be well generalized to measure any DR results from the perspective of original distribution fidelity or lack interactive exploration of DR results. There is still a need for more intuitive and quantitative analysis to interactively explore high-dimensional data and improve interpretability. We propose a metric and a generalized algorithm-agnostic approach based on the concept of capacity to evaluate and analyze the DR results. Based on our approach, we develop a visual analytics system, HiLow, for exploring high-dimensional data and projections. We also propose a mixed-initiative recommendation algorithm that assists users in interactively manipulating the DR results. Users can compare the differences in data distribution after the interaction through HiLow. Furthermore, we propose a novel visualization design focusing on quantitative analysis of differences between high- and low-dimensional data distributions. Finally, through a user study and case studies, we validate the effectiveness of our approach and system in enhancing the interpretability of projections and analyzing the distribution of high- and low-dimensional data.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":false,"name":"Yang Zhang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Jisheng Liu"},{"affiliations":"","email":"","is_corresponding":false,"name":"Chufan Lai"},{"affiliations":"","email":"","is_corresponding":false,"name":"Yuan Zhou"},{"affiliations":"","email":"","is_corresponding":true,"name":"Siming Chen"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Siming Chen"],"doi":"10.1109/TVCG.2023.3324851","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233324851","time_end":"","time_stamp":"","time_start":"","title":"Interpreting High-Dimensional Projections With Capacity","uid":"v-tvcg-20233324851","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"The fund investment industry heavily relies on the expertise of fund managers, who bear the responsibility of managing portfolios on behalf of clients. With their investment knowledge and professional skills, fund managers gain a competitive advantage over the average investor in the market. Consequently, investors prefer entrusting their investments to fund managers rather than directly investing in funds. For these investors, the primary concern is selecting a suitable fund manager. While previous studies have employed quantitative or qualitative methods to analyze various aspects of fund managers, such as performance metrics, personal characteristics, and performance persistence, they often face challenges when dealing with a large candidate space. Moreover, distinguishing whether a fund manager's performance stems from skill or luck poses a challenge, making it difficult to align with investors' preferences in the selection process. To address these challenges, this study characterizes the requirements of investors in selecting suitable fund managers and proposes an interactive visual analytics system called FMLens.
This system streamlines the fund manager selection process, allowing investors to efficiently assess and deconstruct fund managers' investment styles and abilities across multiple dimensions. Additionally, the system empowers investors to scrutinize and compare fund managers' performances. The effectiveness of the approach is demonstrated through two case studies and a qualitative user study. Feedback from domain experts indicates that the system excels in analyzing fund managers from diverse perspectives, enhancing the efficiency of fund manager evaluation and selection.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Longfei Chen"},{"affiliations":"","email":"","is_corresponding":false,"name":"Chen Cheng"},{"affiliations":"","email":"","is_corresponding":false,"name":"He Wang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Xiyuan Wang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Yun Tian"},{"affiliations":"","email":"","is_corresponding":false,"name":"Xuanwu Yue"},{"affiliations":"","email":"","is_corresponding":false,"name":"Wong Kam-Kwai"},{"affiliations":"","email":"","is_corresponding":false,"name":"Haipeng Zhang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Suting Hong"},{"affiliations":"","email":"","is_corresponding":false,"name":"Quan Li"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Longfei Chen"],"doi":"10.1109/TVCG.2024.3394745","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Financial Data, Fund Manager Selection, Visual Analytics"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243394745","time_end":"","time_stamp":"","time_start":"","title":"FMLens: Towards Better Scaffolding the Process of Fund Manager Selection in Fund Investments","uid":"v-tvcg-20243394745","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"This article explores how the ability to recall information in data visualizations depends on the presentation technology. Participants viewed 10 Isotype visualizations on a 2D screen, in 3D, in Virtual Reality (VR) and in Mixed Reality (MR). To provide a fair comparison between the three 3D conditions, we used LIDAR to capture the details of the physical rooms, and used this information to create our textured 3D models. For all environments, we measured the number of visualizations recalled and their order (2D) or spatial location (3D, VR, MR). We also measured the number of syntactic and semantic features recalled. Results of our study show increased recall and greater richness of data understanding in the MR condition. Not only did participants recall more visualizations and ordinal/spatial positions in MR, but they also remembered more details about graph axes and data mappings, and more information about the shape of the data. 
We discuss how differences in the spatial and kinesthetic cues provided in these different environments could contribute to these results, and reasons why we did not observe comparable performance in the 3D and VR conditions.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Christophe Hurter"},{"affiliations":"","email":"","is_corresponding":false,"name":"Bernice Rogowitz"},{"affiliations":"","email":"","is_corresponding":false,"name":"Guillaume Truong"},{"affiliations":"","email":"","is_corresponding":false,"name":"Tiffany Andry"},{"affiliations":"","email":"","is_corresponding":false,"name":"Hugo Romat"},{"affiliations":"","email":"","is_corresponding":false,"name":"Ludovic Gardy"},{"affiliations":"","email":"","is_corresponding":false,"name":"Fereshteh Amini"},{"affiliations":"","email":"","is_corresponding":false,"name":"Nathalie Henry Riche"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Christophe Hurter"],"doi":"10.1109/TVCG.2023.3336588","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Data visualization, Three-dimensional displays, Virtual reality, Mixed reality, Electronic mail, Syntactics, Semantics"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233336588","time_end":"","time_stamp":"","time_start":"","title":"Memory Recall for Data Visualizations in Mixed Reality, Virtual Reality, 3D, and 2D","uid":"v-tvcg-20233336588","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"With the rise of short-form video platforms and the increasing availability of data, we see the potential for people to share short-form videos embedded with data in situ (e.g., daily steps when running) to increase the credibility and expressiveness of their stories. However, creating and sharing such videos in situ is challenging since it involves multiple steps and skills (e.g., data visualization creation and video editing), especially for amateurs. By conducting a formative study (N=10) using three design probes, we collected the motivations and design requirements. We then built VisTellAR, a mobile AR authoring tool, to help amateur video creators embed data visualizations in short-form videos in situ. A two-day user study shows that participants (N=12) successfully created various videos with data visualizations in situ and they confirmed the ease of use and learning. 
AR pre-stage authoring proved useful for helping people set up data visualizations in the physical environment, enabling richer camera movements and interactions with gestures and physical objects for storytelling.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Wai Tong"},{"affiliations":"","email":"","is_corresponding":false,"name":"Kento Shigyo"},{"affiliations":"","email":"","is_corresponding":false,"name":"Lin-Ping Yuan"},{"affiliations":"","email":"","is_corresponding":false,"name":"Mingming Fan"},{"affiliations":"","email":"","is_corresponding":false,"name":"Ting-Chuen Pong"},{"affiliations":"","email":"","is_corresponding":false,"name":"Huamin Qu"},{"affiliations":"","email":"","is_corresponding":false,"name":"Meng Xia"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Wai Tong"],"doi":"10.1109/TVCG.2024.3372104","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Personal data, augmented reality, data visualization, storytelling, short-form video"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243372104","time_end":"","time_stamp":"","time_start":"","title":"VisTellAR: Embedding Data Visualization to Short-form Videos Using Mobile Augmented Reality","uid":"v-tvcg-20243372104","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"The process of labeling medical text plays a crucial role in medical research. Nonetheless, creating accurately labeled medical texts of high quality is often a time-consuming task that requires specialized domain knowledge. Traditional methods for generating labeled data typically rely on rigid rule-based approaches, which may not adapt well to new tasks. While recent machine learning (ML) methodologies have mitigated the manual labeling efforts, configuring models to align with specific research requirements can be challenging for labelers without technical expertise. Moreover, automated labeling techniques, such as transfer learning, face difficulties in directly incorporating expert input, whereas semi-automated methods, like data programming, allow knowledge integration through rules or knowledge bases but may lack continuous result refinement throughout the entire labeling process. In this study, we present a collaborative human-ML teaming workflow that seamlessly integrates visual cluster analysis and active learning to assist domain experts in labeling medical text with high efficiency. Additionally, we introduce an innovative neural network model called the embedding network, which incorporates expert insights to generate task-specific embeddings for medical texts. We integrate the workflow and embedding network into a visual analytics tool named KMTLabeler, equipped with coordinated multi-level views and interactions.
Two illustrative case studies, along with a controlled user study, provide substantial evidence of the effectiveness of KMTLabeler in creating an efficient labeling environment for medical text classification.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"He Wang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Yang Ouyang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Yuchen Wu"},{"affiliations":"","email":"","is_corresponding":false,"name":"Chang Jiang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Lixia Jin"},{"affiliations":"","email":"","is_corresponding":false,"name":"Yuanwu Cao"},{"affiliations":"","email":"","is_corresponding":false,"name":"Quan Li"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["He Wang"],"doi":"10.1109/TVCG.2024.3406387","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Medical Text Labeling, Expert Knowledge, Embedding Network, Visual Cluster Analysis, Active Learning"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243406387","time_end":"","time_stamp":"","time_start":"","title":"KMTLabeler: An Interactive Knowledge-Assisted Labeling Tool for Medical Text Classification","uid":"v-tvcg-20243406387","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"We present a novel method for the interactive construction and rendering of extremely large molecular scenes, capable of representing multiple biological cells in atomistic detail. Our method is tailored for scenes that are procedurally constructed based on a given set of building rules. Rendering of large scenes normally requires the entire scene available in-core, or alternatively, it requires out-of-core management to load data into the memory hierarchy as a part of the rendering loop. Instead of out-of-core memory management, we propose to procedurally generate the scene on-demand on the fly. The key idea is a positional- and view-dependent procedural scene-construction strategy, where only a fraction of the atomistic scene around the camera is available in the GPU memory at any given time. The atomistic detail is populated into a uniform-space partitioning using a grid that covers the entire scene. Most of the grid cells are not filled with geometry; only those potentially seen by the camera are populated. The atomistic detail is populated in a compute shader, and its representation is connected with acceleration data structures for hardware ray tracing on modern GPUs. Objects that are far away, where atomistic detail is not perceivable from a given viewpoint, are represented by a triangle mesh mapped with a seamless texture, generated from the rendering of geometry from atomistic detail. The algorithm consists of two pipelines, the construction-compute pipeline and the rendering pipeline, which work together to render molecular scenes containing trillions of atoms, at an atomistic resolution far beyond the limit of the GPU memory.
We demonstrate our technique on multiple models of SARS-CoV-2 and the red blood cell.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Ruwayda Alharbi"},{"affiliations":"","email":"","is_corresponding":false,"name":"Ond\u0159ej Strnad"},{"affiliations":"","email":"","is_corresponding":false,"name":"Tobias Klein"},{"affiliations":"","email":"","is_corresponding":false,"name":"Ivan Viola"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Ruwayda Alharbi"],"doi":"10.1109/TVCG.2024.3411786","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Interactive rendering, view-guided scene construction, biological data, hardware ray tracing"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243411786","time_end":"","time_stamp":"","time_start":"","title":"Nanomatrix: Scalable Construction of Crowded Biological Environments","uid":"v-tvcg-20243411786","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Generative text-to-image models, which allow users to create appealing images through a text prompt, have seen a dramatic increase in popularity in recent years. However, most users have a limited understanding of how such models work and often rely on trial and error strategies to achieve satisfactory results. The prompt history contains a wealth of information that could provide users with insights into what has been explored and how the prompt changes impact the output image, yet little research attention has been paid to the visual analysis of such a process to support users. We propose the Image Variant Graph, a novel visual representation designed to support comparing prompt-image pairs and exploring the editing history. The Image Variant Graph models prompt differences as edges between corresponding images and presents the distances between images through projection. Based on the graph, we developed the PrompTHis system through co-design with artists. By reviewing and analyzing the prompting history, users can better understand the impact of prompt changes and gain more effective control of image generation.
A quantitative user study and qualitative interviews demonstrate that PrompTHis can help users review the prompt history, make sense of the model, and plan their creative process.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Yuhan Guo"},{"affiliations":"","email":"","is_corresponding":false,"name":"Hanning Shao"},{"affiliations":"","email":"","is_corresponding":false,"name":"Can Liu"},{"affiliations":"","email":"","is_corresponding":false,"name":"Kai Xu"},{"affiliations":"","email":"","is_corresponding":false,"name":"Xiaoru Yuan"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yuhan Guo"],"doi":"10.1109/TVCG.2024.3408255","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Text visualization, image visualization, text-to-image generation, editing history, provenance, generative art"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243408255","time_end":"","time_stamp":"","time_start":"","title":"PrompTHis: Visualizing the Process and Influence of Prompt Editing during Text-to-Image Creation","uid":"v-tvcg-20243408255","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Information visualization uses various types of representations to encode data into graphical formats. Prior work on visualization techniques has evaluated the accuracy of perceived numerical data values from visual data encodings such as graphical position, length, orientation, size, and color. Our work aims to extend the research of graphical perception to the use of motion as data encodings for quantitative values. We present two experiments implementing multiple fundamental aspects of motion such as type, speed, and synchronicity that can be used for numerical value encoding as well as comparing motion to static visual encodings in terms of user perception and accuracy. We studied how well users can assess the differences between several types of motion and static visual encodings and present an updated ranking of accuracy for quantitative judgments. Our results indicate that non-synchronized motion can be interpreted more quickly and more accurately than synchronized motion. Moreover, our ranking of static and motion visual representations shows that motion, especially expansion and translational types, has great potential as a data encoding technique for quantitative value. Finally, we discuss the implications for the use of animation and motion for numerical representations in data visualization.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Shaghayegh Esmaeili"},{"affiliations":"","email":"","is_corresponding":false,"name":"Samia Kabir"},{"affiliations":"","email":"","is_corresponding":false,"name":"Anthony M. Colas"},{"affiliations":"","email":"","is_corresponding":false,"name":"Rhema P. Linder"},{"affiliations":"","email":"","is_corresponding":false,"name":"Eric D. 
Ragan"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Shaghayegh Esmaeili"],"doi":"10.1109/TVCG.2022.3193756","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Information visualization, animation and motion-related techniques, empirical study, graphical perception, evaluation."],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20223193756","time_end":"","time_stamp":"","time_start":"","title":"Evaluating Graphical Perception of Visual Motion for Quantitative Data Encoding","uid":"v-tvcg-20223193756","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Point clouds are widely used as a versatile representation of 3D entities and scenes for all scale domains and in a variety of application areas, serving as a fundamental data category to directly convey spatial features. However, due to point sparsity, lack of structure, irregular distribution, and acquisition-related inaccuracies, results of point cloud visualization are often subject to visual complexity and ambiguity. In this regard, non-photorealistic rendering can improve visual communication by reducing the cognitive effort required to understand an image or scene and by directing attention to important features. In the last 20 years, this has been demonstrated by various non-photorealistic r rendering approaches that were proposed to target point clouds specifically. However, they do not use a common language or structure for assessment which complicates comparison and selection. Further, recent developments regarding point cloud characteristics and processing, such as massive data size or web-based rendering are rarely considered. To address these issues, we present a survey on non-photorealistic rendering approaches for point cloud visualization, providing an overview of the current state of research. We derive a structure for the assessment of approaches, proposing seven primary dimensions for the categorization regarding intended goals, data requirements, used techniques, and mode of operation. 
We then systematically assess corresponding approaches and utilize this classification to identify trends and research gaps, motivating future research in the development of effective non-photorealistic point cloud rendering methods.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Ole Wegen"},{"affiliations":"","email":"","is_corresponding":false,"name":"Willy Scheibel"},{"affiliations":"","email":"","is_corresponding":false,"name":"Matthias Trapp"},{"affiliations":"","email":"","is_corresponding":false,"name":"Rico Richter"},{"affiliations":"","email":"","is_corresponding":false,"name":"J\u00fcrgen D\u00f6llner"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Ole Wegen"],"doi":"10.1109/TVCG.2024.3402610","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Point clouds, survey, non-photorealistic rendering"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243402610","time_end":"","time_stamp":"","time_start":"","title":"A Survey on Non-photorealistic Rendering Approaches for Point Cloud Visualization","uid":"v-tvcg-20243402610","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"With the growing complexity and volume of data, visualizations have become more intricate, often requiring advanced techniques to convey insights. These complex charts are prevalent in everyday life, and individuals who lack knowledge in data visualization may find them challenging to understand. This paper investigates using Large Language Models (LLMs) to help users with low data literacy understand complex visualizations. While previous studies focus on text interactions with users, we noticed that visual cues are also critical for interpreting charts. We introduce an LLM application that supports both text and visual interaction for guiding chart interpretation. Our study with 26 participants revealed that the in-situ support effectively assisted users in interpreting charts and enhanced learning by addressing specific chart-related questions and encouraging further exploration. Visual communication allowed participants to convey their interests straightforwardly, eliminating the need for textual descriptions. However, the LLM assistance led users to engage less with the system, resulting in fewer insights from the visualizations. This suggests that users, particularly those with lower data literacy and motivation, may have over-relied on the LLM agent. 
We discuss opportunities for deploying LLMs to enhance visualization literacy while emphasizing the need for a balanced approach.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Kiroong Choe"},{"affiliations":"","email":"","is_corresponding":false,"name":"Chaerin Lee"},{"affiliations":"","email":"","is_corresponding":false,"name":"Soohyun Lee"},{"affiliations":"","email":"","is_corresponding":false,"name":"Jiwon Song"},{"affiliations":"","email":"","is_corresponding":false,"name":"Aeri Cho"},{"affiliations":"","email":"","is_corresponding":false,"name":"Nam Wook Kim"},{"affiliations":"","email":"","is_corresponding":false,"name":"Jinwook Seo"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Kiroong Choe"],"doi":"10.1109/TVCG.2024.3413195","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Visualization literacy, Large language model, Visual communication"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243413195","time_end":"","time_stamp":"","time_start":"","title":"Enhancing Data Literacy On-demand: LLMs as Guides for Novices in Chart Interpretation","uid":"v-tvcg-20243413195","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Data charts are prevalent across various fields due to their efficacy in conveying complex data relationships. However, static charts may sometimes struggle to engage readers and efficiently present intricate information, potentially resulting in limited understanding. We introduce \u201cLive Charts,\u201d a new format of presentation that decomposes complex information within a chart and explains the information pieces sequentially through rich animations and accompanying audio narration. We propose an automated approach to revive static charts into Live Charts. Our method integrates GNN-based techniques to analyze the chart components and extract data from charts. Then we adopt large natural language models to generate appropriate animated visuals along with a voice-over to produce Live Charts from static ones. We conducted a thorough evaluation of our approach, which involved the model performance, use cases, a crowd-sourced user study, and expert interviews. The results demonstrate Live Charts offer a multi-sensory experience where readers can follow the information and understand the data insights better. 
We analyze the benefits and drawbacks of Live Charts over static charts as a new information consumption experience.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":false,"name":"Velitchko Filipov"},{"affiliations":"","email":"","is_corresponding":false,"name":"Alessio Arleo"},{"affiliations":"","email":"","is_corresponding":false,"name":"Markus B\u00f6gl"},{"affiliations":"","email":"","is_corresponding":false,"name":"Silvia Miksch"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Lu Ying"],"doi":"10.1109/TVCG.2024.3397004","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Charts, storytelling, machine learning, automatic visualization"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243397004","time_end":"","time_stamp":"","time_start":"","title":"Reviving Static Charts into Live Charts","uid":"v-tvcg-20243397004","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Visualizing event timelines for collaborative text writing is an important application for navigating and understanding such data, as time passes and the size and complexity of both text and timeline increase. They are often employed by applications such as code repositories and collaborative text editors. In this paper, we present a visualization tool to explore historical records of writing of legislative texts, which were discussed and voted on by an assembly of representatives. Our visualization focuses on event timelines from text documents that involve multiple people and different topics, allowing for observation of different proposed versions of said text or tracking data provenance of given text sections, while highlighting the connections between all elements involved. We also describe the process of designing such a tool alongside domain experts, with three steps of evaluation being conducted to verify the effectiveness of our design.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":false,"name":"Gabriel D. Cantareira"},{"affiliations":"","email":"","is_corresponding":false,"name":"Yiwen Xing"},{"affiliations":"","email":"","is_corresponding":false,"name":"Nicholas Cole"},{"affiliations":"","email":"","is_corresponding":false,"name":"Rita Borgo"},{"affiliations":"","email":"","is_corresponding":true,"name":"Alfie Abdul-Rahman"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Alfie Abdul-Rahman"],"doi":"10.1109/TVCG.2024.3376406","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Data visualization, Collaboration, History, Humanities, Writing, Navigation, Metadata"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243376406","time_end":"","time_stamp":"","time_start":"","title":"Interactive Hierarchical Timeline for Collaborative Text Negotiation in Historical Records","uid":"v-tvcg-20243376406","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Creating an animated data video with audio narration is a time-consuming and complex task that requires expertise. 
It involves designing complex animations, turning written scripts into audio narrations, and synchronizing visual changes with the narrations. This paper presents WonderFlow, an interactive authoring tool that facilitates narration-centric design of animated data videos. WonderFlow allows authors to easily specify semantic links between text and the corresponding chart elements. Then it automatically generates audio narration by leveraging text-to-speech techniques and aligns the narration with an animation. WonderFlow provides a structure-aware animation library designed to ease chart animation creation, enabling authors to apply pre-designed animation effects to common visualization components. Additionally, authors can preview and refine their data videos within the same system, without having to switch between different creation tools. A series of evaluation results confirmed that WonderFlow is easy to use and simplifies the creation of data videos with narration-animation interplay.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":false,"name":"Yun Wang"},{"affiliations":"","email":"","is_corresponding":true,"name":"Leixian Shen"},{"affiliations":"","email":"","is_corresponding":false,"name":"Zhengxin You"},{"affiliations":"","email":"","is_corresponding":false,"name":"Xinhuan Shu"},{"affiliations":"","email":"","is_corresponding":false,"name":"Bongshin Lee"},{"affiliations":"","email":"","is_corresponding":false,"name":"John Thompson"},{"affiliations":"","email":"","is_corresponding":false,"name":"Haidong Zhang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Dongmei Zhang"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Leixian Shen"],"doi":"10.1109/TVCG.2024.3411575","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Data video, Data visualization, Narration-animation interplay, Storytelling, Authoring tool"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243411575","time_end":"","time_stamp":"","time_start":"","title":"WonderFlow: Narration-Centric Design of Animated Data Videos","uid":"v-tvcg-20243411575","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"As urban populations grow, effectively accessing urban performance measures such as livability and comfort becomes increasingly important due to their significant socioeconomic impacts. While Point of Interest (POI) data has been utilized for various applications in location-based services, its potential for urban performance analytics remains unexplored. In this paper, we present SenseMap, a novel approach for analyzing urban performance by leveraging POI data as a semantic representation of urban functions. We quantify the contribution of POIs to different urban performance measures by calculating semantic textual similarities on our constructed corpus. We propose Semantic-adaptive Kernel Density Estimation, which takes into account POIs\u2019 influential areas across different Traffic Analysis Zones and semantic contributions to generate semantic density maps for measures. We design and implement a feature-rich, real-time visual analytics system for users to explore the urban performance of their surroundings. 
Evaluations with human judgment and reference data demonstrate the feasibility and validity of our method. Usage scenarios and user studies demonstrate the capability, usability, and explainability of our system.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Juntong Chen"},{"affiliations":"","email":"","is_corresponding":false,"name":"Qiaoyun Huang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Changbo Wang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Chenhui Li"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Juntong Chen"],"doi":"10.1109/TVCG.2023.3333356","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Urban data, semantic textual similarity, point of interest, density map, visual analytics, visualization design"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233333356","time_end":"","time_stamp":"","time_start":"","title":"SenseMap: Urban Performance Visualization and Analytics via Semantic Textual Similarity","uid":"v-tvcg-20233333356","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Impact dynamics are crucial for estimating the growth patterns of NFT projects by tracking the diffusion and decay of their relative appeal among stakeholders. Machine learning methods for impact dynamics analysis are incomprehensible and rigid in terms of their interpretability and transparency, whilst stakeholders require interactive tools for informed decision-making. Nevertheless, developing such a tool is challenging due to the substantial, heterogeneous NFT transaction data and the requirements for flexible, customized interactions. To this end, we integrate intuitive visualizations to unveil the impact dynamics of NFT projects. We first conduct a formative study and summarize analysis criteria, including substitution mechanisms, impact attributes, and design requirements from stakeholders. Next, we propose the Minimal Substitution Model to simulate substitutive systems of NFT projects that can be feasibly represented as node-link graphs. Particularly, we utilize attribute-aware techniques to embed the project status and stakeholder behaviors in the layout design. Accordingly, we develop a multi-view visual analytics system, namely NFTracer, allowing interactive analysis of impact dynamics in NFT transactions. We demonstrate the informativeness, effectiveness, and usability of NFTracer by performing two case studies with domain experts and one user study with stakeholders. The studies suggest that NFT projects featuring a higher degree of similarity are more likely to substitute each other. 
The impact of NFT projects within substitutive systems is contingent upon the degree of stakeholders\u2019 influx and projects\u2019 freshness.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Yifan Cao"},{"affiliations":"","email":"","is_corresponding":false,"name":"Qing Shi"},{"affiliations":"","email":"","is_corresponding":false,"name":"Lucas Shen"},{"affiliations":"","email":"","is_corresponding":false,"name":"Kani Chen"},{"affiliations":"","email":"","is_corresponding":false,"name":"Yang Wang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Wei Zeng"},{"affiliations":"","email":"","is_corresponding":false,"name":"Huamin Qu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yifan Cao"],"doi":"10.1109/TVCG.2024.3402834","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Stakeholders, Nonfungible Tokens, Social Networking (Online), Visual Analytics, Network Analyzers, Measurement, Layout, Impact Dynamics Analysis, Non-Fungible Tokens (NFTs), NFT Transaction Data, Substitutive Systems, Visual Analytics"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243402834","time_end":"","time_stamp":"","time_start":"","title":"Tracing NFT Impact Dynamics in Transaction-flow Substitutive Systems with Visual Analytics","uid":"v-tvcg-20243402834","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Visual analytics supports data analysis tasks within complex domain problems. However, due to the richness of data types, visual designs, and interaction designs, users need to recall and process a significant amount of information when they visually analyze data. These challenges emphasize the need for more intelligent visual analytics methods. Large language models have demonstrated the ability to interpret various forms of textual data, offering the potential to facilitate intelligent support for visual analytics. We propose LEVA, a framework that uses large language models to enhance users' VA workflows at multiple stages: onboarding, exploration, and summarization. To support onboarding, we use large language models to interpret visualization designs and view relationships based on system specifications. For exploration, we use large language models to recommend insights based on the analysis of system status and data to facilitate mixed-initiative exploration. For summarization, we present a selective reporting strategy to retrace analysis history through a stream visualization and generate insight reports with the help of large language models. We demonstrate how LEVA can be integrated into existing visual analytics systems. 
Two usage scenarios and a user study suggest that LEVA effectively aids users in conducting visual analytics.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Yuheng Zhao"},{"affiliations":"","email":"","is_corresponding":false,"name":"Yixing Zhang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Yu Zhang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Xinyi Zhao"},{"affiliations":"","email":"","is_corresponding":false,"name":"Junjie Wang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Zekai Shao"},{"affiliations":"","email":"","is_corresponding":false,"name":"Cagatay Turkay"},{"affiliations":"","email":"","is_corresponding":false,"name":"Siming Chen"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yuheng Zhao"],"doi":"10.1109/TVCG.2024.3368060","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Insight recommendation, mixed-initiative, interface agent, large language models, visual analytics"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20243368060","time_end":"","time_stamp":"","time_start":"","title":"LEVA: Using Large Language Models to Enhance Visual Analytics","uid":"v-tvcg-20243368060","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"We present V-Mail, a framework of cross-platform applications, interactive techniques, and communication protocols for improved multi-person correspondence about spatial 3D datasets. Inspired by the daily use of e-mail, V-Mail seeks to enable a similar style of rapid, multi-person communication accessible on any device; however, it aims to do this in the new context of spatial 3D communication, where limited access to 3D graphics hardware typically prevents such communication. The approach integrates visual data storytelling with data exploration, spatial annotations, and animated transitions. V-Mail \u201cdata stories\u201d are exported in a standard video file format to establish a common baseline level of access on (almost) any device. The V-Mail framework also includes a series of complementary client applications and plugins that enable different degrees of story co-authoring and data exploration, adjusted automatically to match the capabilities of various devices. A lightweight, phone-based V-Mail app makes it possible to annotate data by adding captions to the video. These spatial annotations are then immediately accessible to team members running high-end 3D graphics visualization systems that also include a V-Mail client, implemented as a plugin. Results and evaluation from applying V-Mail to assist communication within an interdisciplinary science team studying Antarctic ice sheets confirm the utility of the asynchronous, cross-platform collaborative framework while also highlighting some current limitations and opportunities for future work.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Jung Who Nam"},{"affiliations":"","email":"","is_corresponding":false,"name":"Tobias Isenberg"},{"affiliations":"","email":"","is_corresponding":false,"name":"Daniel F. 
Keefe"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Jung Who Nam"],"doi":"10.1109/TVCG.2022.3229017","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Human-computer interaction, visualization of scientific 3D data, communication, storytelling, immersive analytics"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20223229017","time_end":"","time_stamp":"","time_start":"","title":"V-Mail: 3D-Enabled Correspondence about Spatial Data on (Almost) All Your Devices","uid":"v-tvcg-20223229017","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"In recent years, narrative visualization has gained much attention. Researchers have proposed different design spaces for various narrative visualization genres and scenarios to facilitate the creation process. As users' needs grow and automation technologies advance, increasingly more tools have been designed and developed. In this study, we summarized six genres of narrative visualization (annotated charts, infographics, timelines & storylines, data comics, scrollytelling & slideshow, and data videos) based on previous research and four types of tools (design spaces, authoring tools, ML/AI-supported tools and ML/AI-generator tools) based on the intelligence and automation level of the tools. We surveyed 105 papers and tools to study how automation can progressively engage in visualization design and narrative processes to help users easily create narrative visualizations. This research aims to provide an overview of current research and development in the automation involvement of narrative visualization tools. 
We discuss key research problems in each category and suggest new opportunities to encourage further research in the related domain.","accessible_pdf":false,"authors":[{"affiliations":"","email":"","is_corresponding":true,"name":"Qing Chen"},{"affiliations":"","email":"","is_corresponding":false,"name":"Shixiong Cao"},{"affiliations":"","email":"","is_corresponding":false,"name":"Jiazhe Wang"},{"affiliations":"","email":"","is_corresponding":false,"name":"Nan Cao"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Qing Chen"],"doi":"10.1109/TVCG.2023.3261320","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":["Data Visualization, Automatic Visualization, Narrative Visualization, Design Space, Authoring Tools, Survey"],"open_access":false,"paper_award":"","paper_type":"full","presentation_mode":"","session_id":"tvcg0","slot_id":"v-tvcg-20233261320","time_end":"","time_stamp":"","time_start":"","title":"How Does Automation Shape the Process of Narrative Visualization: A Survey of Tools","uid":"v-tvcg-20233261320","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""}],"time_start":"","title":"TVCG","track":"","zoom_broadcast_link":"","zoom_private_link":"","zoom_private_meeting":"","zoom_private_password":""}]},"v-vr":{"event":"VR Invited Partnership Presentations","event_description":"","event_prefix":"v-vr","event_type":"invited","event_url":"","long_name":"VR Invited Partnership Presentations","organizers":[],"sessions":[]},"w-accessible":{"event":"1st Workshop on Accessible Data Visualization","event_description":"","event_prefix":"w-accessible","event_type":"workshop","event_url":"","long_name":"1st Workshop on Accessible Data Visualization","organizers":[],"sessions":[]},"w-beliv":{"event":"BELIV: evaluation and BEyond - methodoLogIcal approaches for Visualization","event_description":"","event_prefix":"w-beliv","event_type":"workshop","event_url":"","long_name":"BELIV: evaluation and BEyond - methodoLogIcal approaches for Visualization","organizers":[],"sessions":[{"chair":[],"discord_category":"","discord_channel":"","discord_channel_id":"","discord_link":"","event_prefix":"w-beliv","ff_link":"","session_id":"w-beliv0","session_image":"w-beliv0.png","time_end":"","time_slots":[{"abstract":"I analyze the evolution of papers certified by the Graphics Replicability Stamp Initiative (GRSI) to be reproducible, with a specific focus on the subset of publications that address visualization-related topics. With this analysis I show that, while the number of papers is increasing overall and within the visualization field, we still have to improve quite a bit to escape the replication crisis. I base my analysis on the data published by the GRSI as well as publication data for the different venues in visualization and lists of journal papers that have been presented at visualization-focused conferences. I also analyze the differences between the involved journals as well as the percentage of reproducible papers in the different presentation venues. Furthermore, I look at the authors of the publications and, in particular, their affiliation countries to see where most reproducible papers come from. Finally, I discuss potential reasons for the low reproducibility numbers and suggest possible ways to overcome these obstacles. 
This paper is reproducible itself, with source code and data available from github.com/tobiasisenberg/Visualization-Reproducibility as well as a free paper copy and all supplemental materials at osf.io/mvnbj.","accessible_pdf":false,"authors":[{"affiliations":["Universit\u00e9 Paris-Saclay, CNRS, Orsay, France","Inria, Saclay, France"],"email":"tobias.isenberg@gmail.com","is_corresponding":true,"name":"Tobias Isenberg"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Tobias Isenberg"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-beliv0","slot_id":"w-beliv-1001","time_end":"","time_stamp":"","time_start":"","title":"The State of Reproducibility Stamps for Visualization Research Papers","uid":"w-beliv-1001","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"In the rapidly evolving field of information visualization, rigorous evaluation is essential for validating new techniques, understanding user interactions, and demonstrating the effectiveness of visualizations. The evaluation of visualization systems is fundamental to ensuring their effectiveness, usability, and impact. Faithful evaluations provide valuable insights into how users interact with and perceive the system, enabling designers to make informed decisions about design choices and improvements. However, an emerging trend of multiple evaluations within a single study raises critical questions about the sustainability, feasibility, and methodological rigor of such an approach. So, \u201chow many evaluations are enough?\u201d is a situational question and cannot be formulaically determined. Our objective is to summarize current trends and patterns to understand general practices across different contribution and evaluation types. New researchers and students, influenced by this trend, may believe that multiple evaluations are necessary for a study. However, the number of evaluations in a study should depend on its contributions and merits, not on the trend of including multiple evaluations to strengthen a paper. In this position paper, we identify this trend through a non-exhaustive literature survey of TVCG papers from issue 1 in 2023 and 2024. 
We then discuss various evaluation strategy patterns in the information visualization field and how this paper will open avenues for further discussion.","accessible_pdf":false,"authors":[{"affiliations":["University of North Carolina at Chapel Hill, Chapel Hill, United States"],"email":"flin@unc.edu","is_corresponding":false,"name":"Feng Lin"},{"affiliations":["University of North Carolina-Chapel Hill, Chapel Hill, United States"],"email":"zeyuwang@cs.unc.edu","is_corresponding":false,"name":"Arran Zeyu Wang"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"dilshadur@sci.utah.edu","is_corresponding":false,"name":"Md Dilshadur Rahman"},{"affiliations":["University of North Carolina-Chapel Hill, Chapel Hill, United States"],"email":"danielle.szafir@cs.unc.edu","is_corresponding":false,"name":"Danielle Albers Szafir"},{"affiliations":["University of Oklahoma, Norman, United States"],"email":"quadri@ou.edu","is_corresponding":true,"name":"Ghulam Jilani Quadri"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Ghulam Jilani Quadri"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-beliv0","slot_id":"w-beliv-1004","time_end":"","time_stamp":"","time_start":"","title":"How Many Evaluations are Enough? A Position Paper on Evaluation Trend in Information Visualization","uid":"w-beliv-1004","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Various standardized tests exist that assess individuals' visualization literacy. Their use can help to draw conclusions from studies. However, it is not taken into account that the test itself can create a pressure situation where participants might fear being exposed and assessed negatively. This is especially problematic when testing domain experts in design studies. We conducted interviews with experts from different domains performing the Mini-VLAT test for visualization literacy to identify potential problems. Our participants reported that the time limit per question, ambiguities in the questions and visualizations, and missing steps in the test procedure mainly had an impact on their performance and content. 
We discuss possible changes to the test design to address these issues and how such assessment methods could be integrated into existing evaluation procedures.","accessible_pdf":false,"authors":[{"affiliations":["University of Stuttgart, Stuttgart, Germany"],"email":"seyda.oeney@visus.uni-stuttgart.de","is_corresponding":true,"name":"Seyda \u00d6ney"},{"affiliations":["University of Stuttgart, Stuttgart, Germany"],"email":"moataz.abdelaal@visus.uni-stuttgart.de","is_corresponding":false,"name":"Moataz Abdelaal"},{"affiliations":["University of Stuttgart, Stuttgart, Germany"],"email":"kuno.kurzhals@visus.uni-stuttgart.de","is_corresponding":false,"name":"Kuno Kurzhals"},{"affiliations":["University of Stuttgart, Stuttgart, Germany"],"email":"paul.betz@sowi.uni-stuttgart.de","is_corresponding":false,"name":"Paul Betz"},{"affiliations":["University of Stuttgart, Stuttgart, Germany"],"email":"cordula.kropp@sowi.uni-stuttgart.de","is_corresponding":false,"name":"Cordula Kropp"},{"affiliations":["University of Stuttgart, Stuttgart, Germany"],"email":"weiskopf@visus.uni-stuttgart.de","is_corresponding":false,"name":"Daniel Weiskopf"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Seyda \u00d6ney"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-beliv0","slot_id":"w-beliv-1005","time_end":"","time_stamp":"","time_start":"","title":"Testing the Test: Observations When Assessing Visualization Literacy of Domain Experts","uid":"w-beliv-1005","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"In visualization, the process of transforming raw data into visually comprehensible representations is pivotal. While existing models like the Information Visualization Reference Model describe the data-to-visual mapping process, they often overlook a crucial intermediary step: design-specific transformations. This process, occurring after data transformation but before visual-data mapping, further derives data, such as groupings, layout, and statistics, that are essential to properly render the visualization. In this paper, we advocate for a deeper exploration of design-specific transformations, highlighting their importance in understanding visualization properties, particularly in relation to user tasks. We incorporate design-specific transformations into the Information Visualization Reference Model and propose a new formalism that encompasses the user task as a function over data. The resulting formalism offers three key benefits over existing visualization models: (1) describing tasks as compositions of functions, (2) enabling analysis of data transformations for visual-data mapping, and (3) empowering reasoning about visualization correctness and effectiveness. 
We further discuss the potential implications of this model on visualization theory and visualization experiment design.","accessible_pdf":false,"authors":[{"affiliations":["Columbia University, New York City, United States"],"email":"ewu@cs.columbia.edu","is_corresponding":true,"name":"Eugene Wu"},{"affiliations":["Tufts University, Medford, United States"],"email":"remco@cs.tufts.edu","is_corresponding":false,"name":"Remco Chang"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Eugene Wu"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-beliv0","slot_id":"w-beliv-1007","time_end":"","time_stamp":"","time_start":"","title":"Design-Specific Transforms In Visualization","uid":"w-beliv-1007","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Stress is among the most commonly employed quality metrics and optimization criteria for dimension reduction projections of high-dimensional data. Complex, high-dimensional data is ubiquitous across many scientific disciplines, including machine learning, biology, and the social sciences. One of the primary methods of visualizing these datasets is with two-dimensional scatter plots that visually capture some properties of the data. Because visually determining the accuracy of these plots is challenging, researchers often use quality metrics to measure the projection\u2019s accuracy or faithfulness to the full data. One of the most commonly employed metrics, normalized stress, is sensitive to uniform scaling (stretching, shrinking) of the projection, despite this act not meaningfully changing anything about the projection. We investigate the effect of scaling on stress and other distance-based quality metrics analytically and empirically by showing just how much the values change and how this affects dimension reduction technique evaluations. We introduce a simple technique to make normalized stress scale-invariant and show that it accurately captures expected behavior on a small benchmark.","accessible_pdf":false,"authors":[{"affiliations":["University of Arizona, Tucson, United States"],"email":"ksmelser@arizona.edu","is_corresponding":false,"name":"Kiran Smelser"},{"affiliations":["University of Arizona, Tucson, United States"],"email":"jacobmiller1@arizona.edu","is_corresponding":true,"name":"Jacob Miller"},{"affiliations":["University of Arizona, Tucson, United States"],"email":"stephen.kobourov@tum.de","is_corresponding":false,"name":"Stephen Kobourov"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Jacob Miller"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-beliv0","slot_id":"w-beliv-1008","time_end":"","time_stamp":"","time_start":"","title":"Normalized Stress is Not Normalized: How to Interpret Stress Correctly","uid":"w-beliv-1008","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"The cognitive processes involved in understanding and misunderstanding visualizations have not yet been fully clarified, even for well-studied designs, such as bar charts. 
In particular, little is known about whether viewers can improve their learning processes by getting better insight into their own cognition. This paper describes a simple method to measure the role of such metacognitive understanding when learning to read bar charts. For this purpose, we conducted an experiment in which we investigated bar chart learning repeatedly, and tested how learning over trials was affected by metacognitive understanding. We integrate the findings into a model of metacognitive processing of visualizations, and discuss implications for the design of visualizations.","accessible_pdf":false,"authors":[{"affiliations":["Heidelberg University, Heidelberg, Germany"],"email":"antonia.schlieder@t-online.de","is_corresponding":true,"name":"Antonia Schlieder"},{"affiliations":["Heidelberg University, Heidelberg, Germany"],"email":"jan.rummel@psychologie.uni-heidelberg.de","is_corresponding":false,"name":"Jan Rummel"},{"affiliations":["Ruprecht-Karls-Universit\u00e4t Heidelberg, Heidelberg, Germany"],"email":"palbers@mathi.uni-heidelberg.de","is_corresponding":false,"name":"Peter Albers"},{"affiliations":["Heidelberg University, Heidelberg, Germany"],"email":"sadlo@uni-heidelberg.de","is_corresponding":false,"name":"Filip Sadlo"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Antonia Schlieder"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-beliv0","slot_id":"w-beliv-1009","time_end":"","time_stamp":"","time_start":"","title":"The Role of Metacognition in Understanding Deceptive Bar Charts","uid":"w-beliv-1009","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Empirical studies in visualisation often compare visual representations to identify the most effective visualisation for a particular visual judgement or decision making task. However, the effectiveness of a visualisation may be intrinsically related to, and difficult to distinguish from, factors such as visualisation literacy. Complicating matters further, visualisation literacy itself is not a singular intrinsic quality, but can be a result of several distinct challenges that a viewer encounters when performing a task with a visualisation. In this paper, we describe how such challenges apply to experiments that we use to evaluate visualisations, and discuss a set of considerations for designing studies in the future. Finally, we argue that aspects of the study design which are often neglected or overlooked (such as the onboarding of participants, tutorials, training etc.) 
can have a big role in the results of a study and can potentially impact the conclusions that the researchers can draw from the study.","accessible_pdf":false,"authors":[{"affiliations":["Northwestern University, Evanston, United States"],"email":"abhraneel@u.northwestern.edu","is_corresponding":true,"name":"Abhraneel Sarma"},{"affiliations":["Northwestern University, Evanston, United States"],"email":"shenglong@u.northwestern.edu","is_corresponding":false,"name":"Sheng Long"},{"affiliations":["Northeastern University, Portland, United States"],"email":"m.correll@northeastern.edu","is_corresponding":false,"name":"Michael Correll"},{"affiliations":["Northwestern University, Chicago, United States"],"email":"matthew.kay@gmail.com","is_corresponding":false,"name":"Matthew Kay"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Abhraneel Sarma"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-beliv0","slot_id":"w-beliv-1015","time_end":"","time_stamp":"","time_start":"","title":"Tasks and Telephone: Understanding Barriers to Inference due to Issues in Experiment Design","uid":"w-beliv-1015","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"This position paper critically examines the graphical inference framework for evaluating visualizations using the lineup task. We present a re-analysis of lineup task data using signal detection theory, applying four Bayesian non-linear models to investigate whether color ramps with more color name variation increase false discoveries. Our study utilizes data from Reda and Szafir\u2019s previous work [20], corroborating their findings while providing additional insights into sensitivity and bias differences across colormaps and individuals. We suggest improvements to lineup study designs and explore the connections between graphical inference, signal detection theory, and statistical decision theory. Our work contributes a more perceptually grounded approach for assessing visualization effectiveness and offers a path forward for better aligning graphical inference methods with human cognition. The results have implications for the development and evaluation of visualizations, particularly for exploratory data analysis scenarios. Supplementary materials are available at https://osf.io/xd5cj/.","accessible_pdf":false,"authors":[{"affiliations":["Northwestern University, Evanston, United States"],"email":"shenglong@u.northwestern.edu","is_corresponding":true,"name":"Sheng Long"},{"affiliations":["Northwestern University, Chicago, United States"],"email":"matthew.kay@gmail.com","is_corresponding":false,"name":"Matthew Kay"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Sheng Long"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-beliv0","slot_id":"w-beliv-1016","time_end":"","time_stamp":"","time_start":"","title":"Old Wine in a New Bottle? 
Analysis of Visual Lineups with Signal Detection Theory","uid":"w-beliv-1016","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Visualising personal experiences is often described as a means for self-reflection, shaping one\u2019s identity, and sharing it with others. In policymaking, personal narratives are regarded as an important source of intelligence to shape public discourse and policy. Therefore, policymakers are interested in the interplay between individual-level experiences and macro-political processes that play into shaping these experiences. In this context, visualisation is regarded as a medium for advocacy, creating a power balance between individuals and the power structures that influence their health and well-being. In this paper, we offer a politically-framed reflection on how visualisation creators define lived experience data, and what design choices they make for visualising them. We identify data characteristics and design choices that enable visualisation authors and consumers to engage in a process of narrative co-construction, while navigating structural forms of inequality. Our political framing is driven by ideas of master and alternative narratives from Diversity Science, in which authors and narrators engage in a process of negotiation with power structures to either maintain or challenge the status quo.","accessible_pdf":false,"authors":[{"affiliations":["City, University of London, London, United Kingdom"],"email":"mai.elshehaly@city.ac.uk","is_corresponding":true,"name":"Mai Elshehaly"},{"affiliations":["City, University of London, London, United Kingdom"],"email":"mirela.reljan-delaney@city.ac.uk","is_corresponding":false,"name":"Mirela Reljan-Delaney"},{"affiliations":["City, University of London, London, United Kingdom"],"email":"j.dykes@city.ac.uk","is_corresponding":false,"name":"Jason Dykes"},{"affiliations":["City, University of London, London, United Kingdom"],"email":"a.slingsby@city.ac.uk","is_corresponding":false,"name":"Aidan Slingsby"},{"affiliations":["City, University of London, London, United Kingdom"],"email":"j.d.wood@city.ac.uk","is_corresponding":false,"name":"Jo Wood"},{"affiliations":["University of Edinburgh, Edinburgh, United Kingdom"],"email":"sam.spiegel@ed.ac.uk","is_corresponding":false,"name":"Sam Spiegel"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Mai Elshehaly"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-beliv0","slot_id":"w-beliv-1018","time_end":"","time_stamp":"","time_start":"","title":"Visualising Lived Experience: Learning from a Master and Alternative Narrative Framing","uid":"w-beliv-1018","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"The generation and presentation of counterfactual explanations (CFEs) are a commonly used, model-agnostic approach to helping end-users reason about the validity of AI/ML model outputs. By demonstrating how sensitive the model's outputs are to minor variations, CFEs are thought to improve understanding of the model's behavior, identify potential biases, and increase the transparency of 'black box models'. 
Here, we examine how CFEs support a diverse audience, both with and without technical expertise, to understand the results of an LLM-informed sentiment analysis. We conducted a preliminary pilot study with ten individuals with varied expertise ranging from NLP, ML, and ethics to specific domains. All individuals were actively using or working with AI/ML technology as part of their daily jobs. Through semi-structured interviews grounded in a set of concrete examples, we examined how CFEs influence participants' perceptions of the model's correctness, fairness, and trustworthiness, and how visualization of CFEs specifically influences those perceptions. We also surface how participants wrestle with their internal definitions of \u2018explainability\u2019, relative to what CFEs present, their cultures, and backgrounds, in addition to the much more widely studied phenomenon of comparing their baseline expectations of the model's performance. Compared to prior research, our findings highlight the sociotechnical frictions that CFEs surface but do not necessarily remedy. We conclude with the design implications of developing transparent AI/ML visualization systems for more general tasks.","accessible_pdf":false,"authors":[{"affiliations":["Tableau Research, Seattle, United States"],"email":"amcrisan@uwaterloo.ca","is_corresponding":true,"name":"Anamaria Crisan"},{"affiliations":["Tableau Software, Seattle, United States"],"email":"nbutters@salesforce.com","is_corresponding":false,"name":"Nathan Butters"},{"affiliations":["Tableau Software, Seattle, United States"],"email":"zoezoezoe.cc@gmail.com","is_corresponding":false,"name":"Zoe Zoe"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Anamaria Crisan"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-beliv0","slot_id":"w-beliv-1020","time_end":"","time_stamp":"","time_start":"","title":"Exploring Subjective Notions of Explainability through Counterfactual Visualization of Sentiment Analysis","uid":"w-beliv-1020","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"The replication crisis has spawned a revolution in scientific methods, aimed at increasing the transparency, robustness, and reliability of scientific outcomes. In particular, the practice of preregistering study designs has shown important advantages. Preregistration can help limit questionable research practices, as well as increase the success rate of study replications. Many fields have now adopted preregistration as a default expectation for published studies. In 2022, we set up a panel \u201cMerits and Limits of User Study Preregistration\u201d with the overall goal of explaining the concept of preregistration to a wide VIS audience and discussing its suitability for visualization research. We report on the arguments and discussion of this panel in the hope that it can benefit the visualization community at large. 
All materials and a copy of this paper are available on our OSF repository at https://osf.io/wes57/.","accessible_pdf":false,"authors":[{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"lonni.besancon@gmail.com","is_corresponding":true,"name":"Lonni Besan\u00e7on"},{"affiliations":["University of Virginia, Charlottesville, United States"],"email":"nosek@virginia.edu","is_corresponding":false,"name":"Brian Nosek"},{"affiliations":["Tilburg University, Tilburg, Netherlands"],"email":"t.l.haven@tilburguniversity.edu","is_corresponding":false,"name":"Tamarinde Haven"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"miriah.meyer@liu.se","is_corresponding":false,"name":"Miriah Meyer"},{"affiliations":["Northeastern University, Boston, United States"],"email":"c.dunne@northeastern.edu","is_corresponding":false,"name":"Cody Dunne"},{"affiliations":["Luxembourg Institute of Science and Technology, Belvaux, Luxembourg"],"email":"mohammad.ghoniem@gmail.com","is_corresponding":false,"name":"Mohammad Ghoniem"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Lonni Besan\u00e7on"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-beliv0","slot_id":"w-beliv-1021","time_end":"","time_stamp":"","time_start":"","title":"Merits and Limits of Preregistration for Visualization Research","uid":"w-beliv-1021","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Despite 30+ years of academic practice, visualization still lacks an explanation of how and why it functions in complex organizations performing knowledge work. This survey examines the intersection of organizational studies and visualization design, highlighting the concept of boundary objects, which visualization practitioners are adopting in both CSCW (computer-supported cooperative work) and HCI. This paper also collects the prior literature on boundary objects in visualization design studies, a methodology which maps closely to action research in organizations, and addresses the same problems of \u2018knowing in common\u2019. Process artifacts generated by visualization design studies function as boundary objects in their own right, facilitating knowledge transfer across disciplines within an organization. Currently, visualization faces the challenge of explaining how sense-making functions across domains, through visualization artifacts, and how these support decision-making. 
As a deeply interdisciplinary field, visualization should adopt the theory of boundary objects in order to embrace its plurality of domains and systems, whilst empowering its practitioners with a unified process-based theory.","accessible_pdf":false,"authors":[{"affiliations":["UC Santa Cruz, Santa Cruz, United States"],"email":"jtotto@ucsc.edu","is_corresponding":true,"name":"Jasmine Tan Otto"},{"affiliations":["California Institute of Technology, Pasadena, United States"],"email":"sd@scottdavidoff.com","is_corresponding":false,"name":"Scott Davidoff"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Jasmine Tan Otto"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-beliv0","slot_id":"w-beliv-1026","time_end":"","time_stamp":"","time_start":"","title":"Visualization Artifacts are Boundary Objects","uid":"w-beliv-1026","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Foundation models for vision and language are the basis of AI applications across numerous sectors of society. The success of these models stems from their ability to mimic human capabilities, namely visual perception in vision models, and analytical reasoning in large language models. As visual perception and analysis are fundamental to data visualization, in this position paper we ask: how can we harness foundation models to advance progress in visualization design? Specifically, how can multimodal foundation models (MFMs) guide visualization design through visual perception? We approach these questions by investigating the effectiveness of MFMs for perceiving visualization, and formalizing the overall visualization design and optimization space. Specifically, we think that MFMs can best be viewed as judges, equipped with the ability to criticize visualizations, and provide us with actions on how to improve a visualization. We provide a deeper characterization for text-to-image generative models, and multi-modal large language models, organized by what these models provide as output, and how to utilize the output for guiding design decisions. 
We hope that our perspective can inspire researchers in visualization on how to approach MFMs for visualization design.","accessible_pdf":false,"authors":[{"affiliations":["Vanderbilt University, Nashville, United States"],"email":"matthew.berger@vanderbilt.edu","is_corresponding":true,"name":"Matthew Berger"},{"affiliations":["Lawrence Livermore National Laboratory, Livermore, United States"],"email":"shusenl@sci.utah.edu","is_corresponding":false,"name":"Shusen Liu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Matthew Berger"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-beliv0","slot_id":"w-beliv-1027","time_end":"","time_stamp":"","time_start":"","title":"[position paper] The Visualization JUDGE: Can Multimodal Foundation Models Guide Visualization Design Through Visual Perception?","uid":"w-beliv-1027","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Submissions of original research that use Large Language Models (LLMs) or that study their behavior suddenly account for a sizable portion of works submitted and accepted to visualization (VIS) conferences and similar venues in human-computer interaction (HCI). In this brief position paper, I argue that reviewers are relatively unprepared to evaluate these submissions effectively. To support this conjecture I reflect on my experience serving on four program committees for VIS and HCI conferences over the past year. I will describe common reviewer critiques that I observed and highlight how these critiques influence the review process. I also raise some concerns about these critiques that could limit applied LLM research to all but the best-resourced labs. While I conclude with suggestions for evaluating research contributions that incorporate LLMs, the ultimate goal of this position paper is to stimulate a discussion on the review process and its challenges.","accessible_pdf":false,"authors":[{"affiliations":["Tableau Research, Seattle, United States"],"email":"amcrisan@uwaterloo.ca","is_corresponding":true,"name":"Anamaria Crisan"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Anamaria Crisan"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-beliv0","slot_id":"w-beliv-1033","time_end":"","time_stamp":"","time_start":"","title":"We Don't Know How to Assess LLM Contributions in VIS/HCI","uid":"w-beliv-1033","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"This paper revisits the role of quantitative and qualitative methods in visualization research in the context of advancements in artificial intelligence (AI). The focus is on how we can bridge between the different methods in an integrated process of analyzing user study data. To this end, a process model of - potentially iterated - semantic enrichment of data is proposed. This joint perspective of data and semantics facilitates the integration of quantitative and qualitative methods. 
The model is motivated by examples of prior work, especially in the area of eye tracking user studies and coding data-rich observations. Finally, there is a discussion of open issues and research opportunities in the interplay between AI and qualitative and quantitative methods for visualization research.","accessible_pdf":false,"authors":[{"affiliations":["University of Stuttgart, Stuttgart, Germany"],"email":"weiskopf@visus.uni-stuttgart.de","is_corresponding":true,"name":"Daniel Weiskopf"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Daniel Weiskopf"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-beliv0","slot_id":"w-beliv-1034","time_end":"","time_stamp":"","time_start":"","title":"Bridging Quantitative and Qualitative Methods for Visualization Research: A Data/Semantics Perspective in the Light of Advanced AI","uid":"w-beliv-1034","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Complexity is often seen as an inherent negative in information design, with the job of the designer being to reduce or eliminate complexity, and with principles like Tufte\u2019s \u201cdata-ink ratio\u201d or \u201cchartjunk\u201d to operationalize minimalism and simplicity in visualizations. However, in this position paper, we call for a more expansive view of complexity as a design material, like color or texture or shape: an element of information design that can be used in many ways, many of which are beneficial to the goals of using data to understand the world around us. We describe complexity as a phenomenon that occurs not just in visual design but in every aspect of the sensemaking process, from data collection to interpretation. For each of these stages, we present examples of ways that these various forms of complexity can be used (or abused) in visualization design. 
We ultimately call on the visualization community to build a more nuanced view of complexity, to look for places to usefully integrate complexity in multiple stages of the design process, and, even when the goal is to reduce complexity, to look for the non-visual forms of complexity that may have otherwise been overlooked.","accessible_pdf":false,"authors":[{"affiliations":["University for Continuing Education Krems, Krems, Austria"],"email":"florian.windhager@donau-uni.ac.at","is_corresponding":true,"name":"Florian Windhager"},{"affiliations":["King's College London, London, United Kingdom"],"email":"alfie.abdulrahman@gmail.com","is_corresponding":false,"name":"Alfie Abdul-Rahman"},{"affiliations":["University of Applied Sciences Potsdam, Potsdam, Germany"],"email":"mark-jan.bludau@fh-potsdam.de","is_corresponding":false,"name":"Mark-Jan Bludau"},{"affiliations":["Warwick Institute for the Science of Cities, Coventry, United Kingdom"],"email":"nicole.hengesbach@posteo.de","is_corresponding":false,"name":"Nicole Hengesbach"},{"affiliations":["University of Amsterdam, Amsterdam, Netherlands"],"email":"h.lamqaddam@uva.nl","is_corresponding":false,"name":"Houda Lamqaddam"},{"affiliations":["OCAD University, Toronto, Canada"],"email":"meirelles.isabel@gmail.com","is_corresponding":false,"name":"Isabel Meirelles"},{"affiliations":["TU Eindhoven, Eindhoven, Netherlands"],"email":"b.speckmann@tue.nl","is_corresponding":false,"name":"Bettina Speckmann"},{"affiliations":["Northeastern University, Portland, United States"],"email":"m.correll@northeastern.edu","is_corresponding":false,"name":"Michael Correll"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Florian Windhager"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-beliv0","slot_id":"w-beliv-1035","time_end":"","time_stamp":"","time_start":"","title":"Complexity as Design Material","uid":"w-beliv-1035","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Qualitative data analysis is widely adopted for user evaluation, not only in the Visualisation community but also in related communities, such as Human-Computer Interaction and Augmented and Virtual Reality. However, the data analysis process is often not clearly described and the results are often simply listed in the form of interesting quotes from or summaries of quotes that were uttered by study participants. This position paper proposes an early concept for the use of a researcher as an \u201cAdvocatus Diaboli\u201d, or devil\u2019s advocate, to try to disprove the results of the data analysis by looking for quotes that contradict the findings or leading questions and task designs. Whatever this devil\u2019s advocate finds can then be used to iterate on the findings and the analysis process to form more suitable theories. On the other hand, researchers are enabled to clarify why they did not include this in their theory. 
This process could increase transparency in the qualitative data analysis process and increase trust in these findings, while being mindful of the necessary resources.","accessible_pdf":false,"authors":[{"affiliations":["University of Applied Sciences Upper Austria, Hagenberg, Austria"],"email":"judith.friedl-knirsch@fh-hagenberg.at","is_corresponding":true,"name":"Judith Friedl-Knirsch"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Judith Friedl-Knirsch"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-beliv0","slot_id":"w-beliv-1037","time_end":"","time_stamp":"","time_start":"","title":"Position paper: Proposing the use of an \u201cAdvocatus Diaboli\u201d as a pragmatic approach to improve transparency in qualitative data analysis and reporting","uid":"w-beliv-1037","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""}],"time_start":"","title":"BELIV","track":"","zoom_broadcast_link":"","zoom_private_link":"","zoom_private_meeting":"","zoom_private_password":""}]},"w-biomedvis":{"event":"Bio+Med+Vis Workshop","event_description":"","event_prefix":"w-biomedvis","event_type":"workshop","event_url":"","long_name":"Bio+Med+Vis Workshop","organizers":[],"sessions":[]},"w-eduvis":{"event":"EduVis: Workshop on Visualization Education, Literacy, and Activities","event_description":"","event_prefix":"w-eduvis","event_type":"workshop","event_url":"","long_name":"EduVis: Workshop on Visualization Education, Literacy, and Activities","organizers":[],"sessions":[{"chair":[],"discord_category":"","discord_channel":"","discord_channel_id":"","discord_link":"","event_prefix":"w-eduvis","ff_link":"","session_id":"w-eduvis0","session_image":"w-eduvis0.png","time_end":"","time_slots":[{"abstract":"Visualizations are a critical medium not only for telling stories, but for fostering exploration. But while there are countless examples of how to use visualizations for \u201cstorytelling with data,\u201d there are few guidelines on how to design visualizations for public exploration. This educator report draws on decades of work in science museums, a public context focused on designing interactive experiences for exploration, to provide evidence-based guidelines for designing exploratory visualizations. Recent studies on interactive visualizations in museums are contextualized within a larger body of museum research on designs that support exploratory learning in interactive exhibits. 
Synthesizing these studies highlights that to create successful exploratory visualizations, designers can apply long-standing guidelines from exhibit design but need to provide more aids for interpretation.","accessible_pdf":false,"authors":[{"affiliations":["Science Communication Lab, Berkeley, United States","University of California, San Francisco, San Francisco, United States"],"email":"jafrazier@gmail.com","is_corresponding":true,"name":"Jennifer Frazier"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Jennifer Frazier"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-eduvis0","slot_id":"w-eduvis-1007","time_end":"","time_stamp":"","time_start":"","title":"Beyond storytelling with data: Guidelines for designing exploratory visualizations","uid":"w-eduvis-1007","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"With the increasing amount of data globally, analyzing and visualizing data are becoming essential skills across various professions. It is important to equip university students with these essential data skills. To learn, design, and develop data visualization, students need knowledge of programming and data science topics. Many university programs lack dedicated data science courses for undergraduate students, making it important to introduce these concepts through integrated courses. However, combining data science and data visualization into one course can be challenging due to the time constraints and the heavy load of learning. In this paper, we discuss the development of teaching data science and data visualization together in one course and share the results of the post-course evaluation survey. From the survey's results, we identified four challenges, including difficulty in learning multiple tools and diverse data science topics, varying proficiency levels with tools and libraries, and selecting and cleaning datasets. We also distilled five opportunities for developing a successful data science and visualization course. 
These opportunities include clarifying the course structure, emphasizing visualization literacy early in the course, updating the course content according to student needs, using large real-world datasets, learning from industry professionals, and promoting collaboration among students.","accessible_pdf":false,"authors":[{"affiliations":["Carleton University, Ottawa, Canada"],"email":"shrihariniramesh@cmail.carleton.ca","is_corresponding":true,"name":"Shri Harini Ramesh"},{"affiliations":["Carleton University, Ottawa, Canada","Bruyere Research Institute, Ottawa, Canada"],"email":"fateme.rajabiyazdi@carleton.ca","is_corresponding":false,"name":"Fateme Rajabiyazdi"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Shri Harini Ramesh"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-eduvis0","slot_id":"w-eduvis-1008","time_end":"","time_stamp":"","time_start":"","title":"Challenges and Opportunities of Teaching Data Visualization Together with Data Science","uid":"w-eduvis-1008","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"This report examines the implementation of the Solution Framework in a social impact project facilitated by VizForSocialGood. It outlines the data visualization process, detailing each stage and offering practical insights. The framework's application demonstrates its effectiveness in enhancing project quality, efficiency, and collaboration, making it a valuable tool for educational and professional environments.","accessible_pdf":false,"authors":[{"affiliations":["Independent Information Designer, Medellin, Colombia","Independent Information Designer, Medellin, Colombia"],"email":"munozdataviz@gmail.com","is_corresponding":true,"name":"Victor Mu\u00f1oz"},{"affiliations":["Corporate Information Designer, Arlington Hts, United States","Corporate Information Designer, Arlington Hts, United States"],"email":"hellokevinford@gmail.com","is_corresponding":false,"name":"Kevin Ford"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Victor Mu\u00f1oz"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-eduvis0","slot_id":"w-eduvis-1010","time_end":"","time_stamp":"","time_start":"","title":"Implementing the Solution Framework in a Social Impact Project","uid":"w-eduvis-1010","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Academic advising can positively impact struggling students' success. We developed AdVizor, a data-driven learning analytics tool for academic risk prediction for advisors. Our system is equipped with a random forest model for grade prediction probabilities and uses a visualization dashboard that allows advisors to interpret model predictions. We evaluated our system in mock advising sessions with academic advisors and undergraduate students at our university. Results show that the system can easily integrate into the existing advising workflow, and visualizations of model outputs can be learned through short training sessions. 
AdVizor supports and complements the existing expertise of the advisor while helping to facilitate advisor-student discussion and analysis. Advisors found the system assisted them in guiding student course selection for the upcoming semester. It allowed them to guide students to prioritize the most critical and impactful courses. Both advisors and students perceived the system positively and were interested in using the system in the future. Our results encourage the development of intelligent advising systems in higher education, catered for advisors.","accessible_pdf":false,"authors":[{"affiliations":["Ontario Tech University, Oshawa, Canada"],"email":"riley.weagant@ontariotechu.net","is_corresponding":false,"name":"Riley Weagant"},{"affiliations":["Ontario Tech University, Oshawa, Canada"],"email":"zixin.zhao@ontariotechu.net","is_corresponding":true,"name":"Zixin Zhao"},{"affiliations":["Ontario Tech University, Oshawa, Canada"],"email":"abradley@uncharted.software","is_corresponding":false,"name":"Adam Badley"},{"affiliations":["Ontario Tech University, Oshawa, Canada"],"email":"christopher.collins@ontariotechu.ca","is_corresponding":false,"name":"Christopher Collins"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Zixin Zhao"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-eduvis0","slot_id":"w-eduvis-1013","time_end":"","time_stamp":"","time_start":"","title":"AdVizor: Using Visual Explanations to Guide Data-Driven Student Advising","uid":"w-eduvis-1013","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"The integration of visualization in computing education has emerged as a promising strategy to enhance student understanding and engagement in complex computing concepts. Motivated by the need to explore effective teaching methods, this research systematically reviews the applications of visualization tools in computing education, aiming to identify gaps and opportunities for future research. We conducted a systematic literature review using papers from Semantic Scholar and Web of Science, and using a refined set of keywords to gather relevant studies. Our search yielded 288 results, which were systematically filtered to include 90 papers. Data extraction focused on publication details, research methods, key findings, future research suggestions, and research categories. Our review identified a diverse range of visualization tools and techniques used across different areas of computing education, including algorithms, programming, online learning, and problem-solving. The findings highlight the effectiveness of these tools in improving student engagement, understanding, and learning outcomes. However, there is a need for rigorous evaluations and the development of new models tailored to specific learning difficulties. 
By identifying effective visualization techniques and areas for further investigation, this review encourages the continued development and integration of visual tools in computing education to support the advancement of teaching methodologies.","accessible_pdf":false,"authors":[{"affiliations":["University of Toronto, Toronto, Canada"],"email":"naaz.sibia@utoronto.ca","is_corresponding":true,"name":"Naaz Sibia"},{"affiliations":["University of Toronto Mississauga, Mississauga, Canada"],"email":"michael.liut@utoronto.ca","is_corresponding":false,"name":"Michael Liut"},{"affiliations":["University of Toronto, Toronto, Canada"],"email":"cnobre@cs.toronto.edu","is_corresponding":false,"name":"Carolina Nobre"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Naaz Sibia"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-eduvis0","slot_id":"w-eduvis-1015","time_end":"","time_stamp":"","time_start":"","title":"Exploring the Role of Visualization in Enhancing Computing Education: A Systematic Literature Review","uid":"w-eduvis-1015","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"The digitalisation of organisations has transformed the way organisations view data. All employees are expected to be data literate and managers are expected to make data-driven decisions [1]. The ability to analyse and visualize the data is a crucial skill set expected from every decision-maker. To help managers develop the skill of data visualization, business schools across the world offer courses in data visualization. From an educator\u2019s perspective, one key decision that he/she must take while designing a visualization course for management students is the software tool to use in the course. Existing literature on data visualization in the scientific community is primarily focused on tools used by researchers or computer scientists ([3], [4]). In [5] the authors evaluate the landscape of commercially available visual analytics systems. In business-related publications like Harvard Business Review, the focus is more on selecting the right chart or on designing effective visualizations ([6], [7]). There is a lack of literature to guide educators in teaching visualization to management students. 
This article attempts to guide educators teaching visualization to management students on how to select the appropriate software tool for their course.","accessible_pdf":false,"authors":[{"affiliations":["Indian institute of management indore, Indore, India"],"email":"sanjogr@iimidr.ac.in","is_corresponding":true,"name":"Sanjog Ray"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Sanjog Ray"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-eduvis0","slot_id":"w-eduvis-1017","time_end":"","time_stamp":"","time_start":"","title":"Visualization Software: How to Select the Right Software for Teaching Visualization.","uid":"w-eduvis-1017","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"In this article, we discuss an experience with design and situated learning in the Creative Data Visualization course, part of the Visual Communication Design undergraduate program at the Federal University of Rio de Janeiro, a free, public Brazilian university that, thanks to affirmative action policies, has become more inclusive over the years. We begin with a brief introduction to the terms Situated Knowledge, coined by Donna Haraway, Situated Design, based on the former concept, and Situated Learning. We then examine the similarities and differences between these notions and the term Situated Visualization to present a model for the concept of Situated Learning in Information Visualization. Following this foundation, we describe the applied methodology, emphasizing the importance of integrating real-world contexts into students\u2019 projects. As a case study, we present three student projects produced as final assignments for the course. Through this article, we aim to underscore the articulation of situated design concepts in information visualization activities and contribute to teaching and learning practices in this field, particularly within the Global South.","accessible_pdf":false,"authors":[{"affiliations":["Universidade Federal do Rio de Janeiro, Rio de Janeiro, Brazil"],"email":"doriskos@eba.ufrj.br","is_corresponding":true,"name":"Doris Kosminsky"},{"affiliations":["Federal University of Rio de Janeiro, Rio de Janeiro, Brazil"],"email":"renata.perim@ufrj.br","is_corresponding":false,"name":"Renata Perim Lopes"},{"affiliations":["UFRJ, RJ, Brazil","IBGE, RJ, Brazil"],"email":"regina.reznik@ufrj.br","is_corresponding":false,"name":"Regina Reznik"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Doris Kosminsky"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-eduvis0","slot_id":"w-eduvis-1018","time_end":"","time_stamp":"","time_start":"","title":"Teaching Information Visualization through Situated Design: Case Studies from the Classroom","uid":"w-eduvis-1018","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"The integration of data visualization in journalism has catalyzed the growth of data storytelling in recent years. 
Today, it is increasingly common for journalism schools to incorporate data visualization into their curricula. However, the approach to teaching data visualization in journalism schools can diverge significantly from that in computer science or design schools, influenced by the varied backgrounds of students and the distinct value systems inherent to these disciplines. This paper reviews my experience and reflections on teaching data visualization in a journalism school. First, I discuss the prominent characteristics of journalism education that pose challenges for course design and teaching. Then, I share firsthand teaching experiences related to each characteristic and recommend approaches for effective teaching.","accessible_pdf":false,"authors":[{"affiliations":["Fudan University, Shanghai, China"],"email":"xingyulan96@gmail.com","is_corresponding":true,"name":"Xingyu Lan"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Xingyu Lan"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-eduvis0","slot_id":"w-eduvis-1019","time_end":"","time_stamp":"","time_start":"","title":"Reflections on Teaching Data Visualization at the Journalism School","uid":"w-eduvis-1019","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"In this paper, we discuss our experiences advancing a professional-oriented graduate program in Cartography & GIScience at the University of Wisconsin-Madison to account for fundamental shifts in conceptual framings, rapidly evolving mapping technologies, and diverse student needs. We focus our attention on considerations for the cartography curriculum given its relevance to (geo)visualization education and map literacy. We reflect on challenges associated with, and lessons learned from, developing a comprehensive and cohesive cartography curriculum across in-person and online learning modalities for a wide range of professional student audiences.","accessible_pdf":false,"authors":[{"affiliations":["University of Wisconsin-Madison, Madison, United States"],"email":"jknelson3@wisc.edu","is_corresponding":true,"name":"Jonathan Nelson"},{"affiliations":["University of Wisconsin-Madison, Madison, United States"],"email":"limpisathian@wisc.edu","is_corresponding":false,"name":"P. William Limpisathian"},{"affiliations":["University of Wisconsin-Madison, Madison, United States"],"email":"reroth@wisc.edu","is_corresponding":false,"name":"Robert Roth"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Jonathan Nelson"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-eduvis0","slot_id":"w-eduvis-1020","time_end":"","time_stamp":"","time_start":"","title":"Developing a Robust Cartography Curriculum to Train the Professional Cartographer","uid":"w-eduvis-1020","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"For over half a century, science centers have been key in communicating science, aiming to increase interest and curiosity in STEM, and promote lifelong learning. 
Science centers integrate interactive technologies like dome displays, touch tables, VR and AR for immersive learning. Visitors can explore complex phenomena, for example by conducting a virtual autopsy. Also, the shift towards digitally interactive exhibits has expanded science centers beyond physical locations to virtual spaces, extending their reach into classrooms. Our investigation revealed several key factors for impactful school visits involving interactive data visualization. Full-dome movies, for example, provide unique perspectives on vast and microscopic phenomena. Hands-on discovery allows pupils to manipulate and investigate data, leading to deeper engagement. Collaborative interaction fosters active learning through group participation. Additionally, clear curriculum connections ensure that visits are pedagogically meaningful. We propose a three-stage model for school visits. The \"Experience\" stage involves immersive visual experiences to spark interest. The \"Engagement\" stage builds on this by providing hands-on interaction with data visualization exhibits. The \"Applicate\" stage offers opportunities to apply and create using data visualization. A future goal of the model is to broaden STEM reach, enabling pupils to benefit from data visualization experiences even if they cannot visit centers.","accessible_pdf":false,"authors":[{"affiliations":["Link\u00f6ping university, Norrk\u00f6ping, Sweden"],"email":"andreas.c.goransson@liu.se","is_corresponding":true,"name":"Andreas G\u00f6ransson"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"konrad.schonborn@liu.se","is_corresponding":false,"name":"Konrad J Sch\u00f6nborn"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Andreas G\u00f6ransson"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-eduvis0","slot_id":"w-eduvis-1026","time_end":"","time_stamp":"","time_start":"","title":"What makes school visits to digital science centers successful?","uid":"w-eduvis-1026","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Parallel coordinate plots (PCPs) are gaining popularity in data exploration, statistical analysis, and predictive analysis, as well as in data-driven storytelling. In this paper, we present the results of a post-hoc analysis of a dataset from a PCP literacy intervention to identify barriers to PCP literacy. We analyzed question responses and inductively identified barriers to PCP literacy. We performed group coding on each individual response and identified new barriers to PCP literacy. Based on our analysis, we present an extended and enhanced list of barriers to PCP literacy. Our findings have implications for educational interventions targeting PCP literacy and can provide an approach for students to learn about PCPs through active learning.","accessible_pdf":false,"authors":[{"affiliations":["University of San Francisco, San Francisco, United States"],"email":"csrinivas2@dons.usfca.edu","is_corresponding":false,"name":"Chandana Srinivas"},{"affiliations":["Cukurova University, Adana, Turkey"],"email":"elifemelfirat@gmail.com","is_corresponding":false,"name":"Elif E. 
Firat"},{"affiliations":["University of Nottingham, Nottingham, United Kingdom"],"email":"robert.laramee@nottingham.ac.uk","is_corresponding":false,"name":"Robert S. Laramee"},{"affiliations":["University of San Francisco, San Francisco, United States"],"email":"apjoshi@usfca.edu","is_corresponding":true,"name":"Alark Joshi"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Alark Joshi"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-eduvis0","slot_id":"w-eduvis-1027","time_end":"","time_stamp":"","time_start":"","title":"An Inductive Approach for Identification of Barriers to PCP Literacy","uid":"w-eduvis-1027","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"With the decreasing cost of consumer display technologies making it easier for universities to have larger displays in classrooms, and the ubiquitous use of online tools such as collaborative whiteboards for remote learning during the COVID-19 pandemic, combining the two can be useful in higher education. This is especially true in visually intensive classes, such as data visualization courses, that can benefit from additional \"space to teach,\" coined after the \"space to think\" sense-making idiom. In this paper, we reflect on our approach to using SAGE3, a collaborative whiteboard with advanced features, in higher education to teach visually intensive classes, provide examples of activities from our own visually-intensive courses, and present student feedback. We gather our observations into usage patterns for using content-rich canvases in education.","accessible_pdf":false,"authors":[{"affiliations":["Virginia Tech, Blacksburg, United States"],"email":"jessemh@vt.edu","is_corresponding":true,"name":"Jesse Harden"},{"affiliations":["University of Hawaii at Manoa, Honolulu, United States"],"email":"nuritk@hawaii.edu","is_corresponding":false,"name":"Nurit Kirshenbaum"},{"affiliations":["University of Hawaii at Manoa, Honolulu, United States"],"email":"tabalbar@hawaii.edu","is_corresponding":false,"name":"Roderick S Tabalba Jr."},{"affiliations":["University of Hawaii at Manoa, Honolulu, United States"],"email":"rtheriot@hawaii.edu","is_corresponding":false,"name":"Ryan Theriot"},{"affiliations":["The University of Hawai'i at M\u0101noa, Honolulu, United States"],"email":"mlr2010@hawaii.edu","is_corresponding":false,"name":"Michael L. 
Rogers"},{"affiliations":["University of Hawaii at Manoa, Honolulu, United States"],"email":"mahdi@hawaii.edu","is_corresponding":false,"name":"Mahdi Belcaid"},{"affiliations":["Virginia Tech, Blacksburg, United States"],"email":"north@vt.edu","is_corresponding":false,"name":"Chris North"},{"affiliations":["University of Illinois at Chicago, Chicago, United States"],"email":"renambot@uic.edu","is_corresponding":false,"name":"Luc Renambot"},{"affiliations":["University of Illinois at Chicago, Chicago, United States"],"email":"llong4@uic.edu","is_corresponding":false,"name":"Lance Long"},{"affiliations":["University of Illinois Chicago, Chicago, United States"],"email":"ajohnson@uic.edu","is_corresponding":false,"name":"Andrew E Johnson"},{"affiliations":["University of Hawaii at Manoa, Honolulu, United States"],"email":"leighj@hawaii.edu","is_corresponding":false,"name":"Jason Leigh"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Jesse Harden"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-eduvis0","slot_id":"w-eduvis-1028","time_end":"","time_stamp":"","time_start":"","title":"Space to Teach: Content-Rich Canvases for Visually-Intensive Education","uid":"w-eduvis-1028","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Data-art blends visualisation, data science, and artistic expression. It allows people to transform information and data into exciting and interesting visual narratives. Hosting a public data-art hands-on workshop enables participants to engage with data and learn fundamental visualisation techniques. However, being a public event, it presents a range of challenges. We outline our approach to organising and conducting a public workshop, that caters to a wide age range, from children to adults. We divide the tutorial into three sections, focusing on data, sketching skills and visualisation. We place emphasis on public engagement, and ensure that participants have fun while learning new skills.","accessible_pdf":false,"authors":[{"affiliations":["Bangor University, Bangor, United Kingdom"],"email":"j.c.roberts@bangor.ac.uk","is_corresponding":true,"name":"Jonathan C Roberts"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Jonathan C Roberts"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-eduvis0","slot_id":"w-eduvis-1029","time_end":"","time_stamp":"","time_start":"","title":"Engaging Data-Art: Conducting a Public Hands-On Workshop","uid":"w-eduvis-1029","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"We propose to leverage the recent development in Large Language Models, in combination to data visualization software and devices in science centers and schools in order to foster more personalized learning experiences. The main goal with our endeavour is to provide to pupils and visitors the same experience they would get with a professional facilitator when interacting with data visualizations of complex scientific phenomena. 
We describe the results from our early prototypes and the intended implementation and testing of our idea.","accessible_pdf":false,"authors":[{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"lonni.besancon@gmail.com","is_corresponding":true,"name":"Lonni Besan\u00e7on"},{"affiliations":["LiU Link\u00f6ping Universitet, Norrk\u00f6ping, Sweden"],"email":"mathis.brossier@liu.se","is_corresponding":false,"name":"Mathis Brossier"},{"affiliations":["King Abdullah University of Science and Technology, Thuwal, Saudi Arabia"],"email":"omar.mena@kaust.edu.sa","is_corresponding":false,"name":"Omar Mena"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"erik.sunden@liu.se","is_corresponding":false,"name":"Erik Sund\u00e9n"},{"affiliations":["Link\u00f6ping university, Norrk\u00f6ping, Sweden"],"email":"andreas.c.goransson@liu.se","is_corresponding":false,"name":"Andreas G\u00f6ransson"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"anders.ynnerman@liu.se","is_corresponding":false,"name":"Anders Ynnerman"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"konrad.schonborn@liu.se","is_corresponding":false,"name":"Konrad J Sch\u00f6nborn"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Lonni Besan\u00e7on"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-eduvis0","slot_id":"w-eduvis-1030","time_end":"","time_stamp":"","time_start":"","title":"TellUs \u2013 Leveraging the power of LLMs with visualization to benefit science centers.","uid":"w-eduvis-1030","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"In this reflective essay, we explore how educational science can be relevant for visualization research, addressing beneficial intersections between the two communities. While visualization has become integral to various areas, including education, our own ongoing collaboration has induced reflections and discussions we believe could benefit visualization research. In particular, we identify five key perspectives: surpassing traditional evaluation metrics by incorporating established educational measures; defining constructs based on existing learning and educational research frameworks; applying established cognitive theories to understand interpretation and interaction with visualizations; establishing uniform terminology across disciplines; and, fostering interdisciplinary convergence. We argue that by integrating educational research constructs, methodologies, and theories, visualization research can further pursue ecological validity and thereby improve the design and evaluation of visual tools. Our essay emphasizes the potential of intensified and systematic collaborations between educational scientists and visualization researchers to advance both fields, and in doing so craft visualization systems that support comprehension, retention, transfer, and critical thinking. 
We argue that this reflective essay serves as a first point of departure for initiating dialogue that, we hope, could help further connect educational science and visualization, by proposing future empirical studies that take advantage of interdisciplinary approaches of mutual gain to both communities.","accessible_pdf":false,"authors":[{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"konrad.schonborn@liu.se","is_corresponding":false,"name":"Konrad J Sch\u00f6nborn"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"lonni.besancon@gmail.com","is_corresponding":true,"name":"Lonni Besan\u00e7on"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Lonni Besan\u00e7on"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-eduvis0","slot_id":"w-eduvis-1031","time_end":"","time_stamp":"","time_start":"","title":"What Can Educational Science Offer Visualization? A Reflective Essay","uid":"w-eduvis-1031","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""}],"time_start":"","title":"EduVis","track":"","zoom_broadcast_link":"","zoom_private_link":"","zoom_private_meeting":"","zoom_private_password":""}]},"w-energyvis":{"event":"EnergyVis 2024: 4th Workshop on Energy Data Visualization","event_description":"","event_prefix":"w-energyvis","event_type":"workshop","event_url":"","long_name":"EnergyVis 2024: 4th Workshop on Energy Data Visualization","organizers":[],"sessions":[{"chair":[],"discord_category":"","discord_channel":"","discord_channel_id":"","discord_link":"","event_prefix":"w-energyvis","ff_link":"","session_id":"w-energyvis0","session_image":"w-energyvis0.png","time_end":"","time_slots":[{"abstract":"Weather can have a significant impact on the power grid. Heat and cold waves lead to increased energy use as customers cool or heat their space, while simultaneously hampering energy production as the environment deviates from ideal operating conditions. Extreme heat has previously melted power cables, while extreme cold can cause vital parts of the energy infrastructure to freeze. Utilities have reserves to compensate for the additional energy use, but in extreme cases which fall outside the forecast energy demand, the impact on the power grid can be severe. In this paper, we present an interactive tool to explore the relationship between weather and power outages. 
We demonstrate its use with the example of Winter Storm Uri\u2019s impact on Texas in February 2021.","accessible_pdf":false,"authors":[{"affiliations":["Institute of Computer Science, Leipzig University, Leipzig, Germany"],"email":"nsonga@informatik.uni-leipzig.de","is_corresponding":true,"name":"Baldwin Nsonga"},{"affiliations":["National Renewable Energy Laboratory, Golden, United States"],"email":"andy.berres@gmail.com","is_corresponding":false,"name":"Andy S Berres"},{"affiliations":["National Renewable Energy Laboratory, Golden, United States"],"email":"bobby.jeffers@nrel.gov","is_corresponding":false,"name":"Robert Jeffers"},{"affiliations":["National Renewable Energy Laboratory, Golden, United States"],"email":"caitlyn.clark6@icloud.com","is_corresponding":false,"name":"Caitlyn Clark"},{"affiliations":["University of Kaiserslautern, Kaiserslautern, Germany"],"email":"hagen@cs.uni-kl.de","is_corresponding":false,"name":"Hans Hagen"},{"affiliations":["Leipzig University, Leipzig, Germany"],"email":"scheuermann@informatik.uni-leipzig.de","is_corresponding":false,"name":"Gerik Scheuermann"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Baldwin Nsonga"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-energyvis0","slot_id":"w-energyvis-1762","time_end":"","time_stamp":"","time_start":"","title":"Extreme Weather and the Power Grid: A Case Study of Winter Storm Uri","uid":"w-energyvis-1762","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"With the growing penetration of inverter-based distributed energy resources and increased loads through electrification, power systems analyses are becoming more important and more complex. Moreover, these analyses increasingly involve the combination of interconnected energy domains with data that are spatially and temporally increasing in scale by orders of magnitude, surpassing the capabilities of many existing analysis and decision-support systems. We present the architectural design, development, and application of a high-resolution web-based visualization environment capable of cross-domain analysis of tens of millions of energy assets, focusing on scalability and performance. Our system supports the exploration, navigation, and analysis of large data from diverse domains such as electrical transmission and distribution systems, mobility and electric vehicle charging networks, communications networks, cyber assets, and other supporting infrastructure. 
We evaluate this system across multiple use cases, describing the capabilities and limitations of a web-based approach for high-resolution energy system visualizations.","accessible_pdf":false,"authors":[{"affiliations":["National Renewable Energy Lab, Golden, United States"],"email":"graham.johnson@nrel.gov","is_corresponding":false,"name":"Graham Johnson"},{"affiliations":["National Renewable Energy Lab, Golden, United States"],"email":"sam.molnar@nrel.gov","is_corresponding":false,"name":"Sam Molnar"},{"affiliations":["National Renewable Energy Laboratory, Golden, United States"],"email":"nicholas.brunhart-lupo@nrel.gov","is_corresponding":false,"name":"Nicholas Brunhart-Lupo"},{"affiliations":["National Renewable Energy Lab, Golden, United States"],"email":"kenny.gruchalla@nrel.gov","is_corresponding":true,"name":"Kenny Gruchalla"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Kenny Gruchalla"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-energyvis0","slot_id":"w-energyvis-2646","time_end":"","time_stamp":"","time_start":"","title":"Architecture for Web-Based Visualization of Large-Scale Energy Domains","uid":"w-energyvis-2646","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"In the pursuit of achieving net-zero greenhouse gas emissions by 2050, policymakers and researchers require sophisticated tools to explore and compare various climate transition scenarios. This paper introduces the Pathways Explorer, an innovative visualization tool designed to facilitate these comparisons by providing an interactive platform that allows users to select, view, and dissect multiple pathways towards sustainability. Developed in collaboration with the \u201cInstitut de l\u2019\u00e9nergie Trottier\u201d (IET), this tool leverages a technoeconomic optimization model to project the energy transformation needed under different constraints and assumptions. We detail the design process that guided the development of the Pathways Explorer, focusing on user-centered design challenges and requirements. A case study is presented to demonstrate how the tool has been utilized by stakeholders to make informed decisions, highlighting its impact and effectiveness. 
The Pathways Explorer not only enhances understanding of complex climate data but also supports strategic planning by providing clear, comparative visualizations of potential future scenarios.","accessible_pdf":false,"authors":[{"affiliations":["Kashika Studio, Montreal, Canada"],"email":"francois.levesque@polymtl.ca","is_corresponding":false,"name":"Fran\u00e7ois L\u00e9vesque"},{"affiliations":["Polytechnique Montreal, Montreal, Canada"],"email":"louis.beaumier@polymtl.ca","is_corresponding":false,"name":"Louis Beaumier"},{"affiliations":["Polytechnique Montreal, Montreal, Canada"],"email":"thomas.hurtut@polymtl.ca","is_corresponding":true,"name":"Thomas Hurtut"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Thomas Hurtut"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-energyvis0","slot_id":"w-energyvis-2743","time_end":"","time_stamp":"","time_start":"","title":"Pathways Explorer: Interactive Visualization of Climate Transition Scenarios","uid":"w-energyvis-2743","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Methane (CH4) leakage monitoring is crucial for environmental protection and regulatory compliance, particularly in the oil and gas industries. Reducing CH4 emissions helps advance green energy by converting it into a valuable energy source through innovative capture technologies. A real-time continuous monitoring system (CMS) is necessary to detect fugitive and intermittent emissions and provide actionable insights. Integrating spatiotemporal data from satellites, airborne sensors, and ground sensors with inventory data and the weather research and forecasting (WRF) model creates a comprehensive dataset, making CMS feasible but posing significant challenges. These challenges include data alignment and fusion, managing heterogeneity, handling missing values, ensuring resolution integrity, and maintaining geometric and radiometric accuracy. This study outlines the procedure for methane leakage detection, addressing challenges at each step and offering solutions through machine learning and data analysis. 
It further details how visual analytics can be implemented to improve the effectiveness of the various aspects of emission monitoring.","accessible_pdf":false,"authors":[{"affiliations":["University of Oklahoma, Norman, United States"],"email":"parisa.masnadi@ou.edu","is_corresponding":true,"name":"Parisa Masnadi Khiabani"},{"affiliations":["University of Oklahoma, Norman, United States"],"email":"danala@ou.edu","is_corresponding":false,"name":"Gopichandh Danala"},{"affiliations":["University of Oklahoma, Norman, United States"],"email":"wolfgang.jentner@uni-konstanz.de","is_corresponding":false,"name":"Wolfgang Jentner"},{"affiliations":["University of Oklahoma, Oklahoma, United States"],"email":"ebert@ou.edu","is_corresponding":false,"name":"David Ebert"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Parisa Masnadi Khiabani"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-energyvis0","slot_id":"w-energyvis-2845","time_end":"","time_stamp":"","time_start":"","title":"Challenges in Data Integration, Monitoring, and Exploration of Methane Emissions: The Role of Data Analysis and Visualization","uid":"w-energyvis-2845","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Transmission System Operators (TSOs) often need to integrate multiple sources of information to make decisions in real time. In cases where a single power line goes offline, due to a natural event or scheduled outage, there typically will be a contingency plan that the TSO may utilize to mitigate the situation. In cases where two or more power lines go offline, this contingency plan is no longer valid, and the TSO must re-plan and reason about the network in real time. A key network property that must be balanced is loadability--the range of permissible voltage levels for a specific bus (or node), understood as a function of power and its active (P) and reactive (Q) components. Loadability indicates how much more demand a specific node can handle before the system becomes unstable. To increase loadability, the TSO can potentially make control actions that raise or lower P or Q, which changes the voltage levels so that they remain within permissible limits. While many methods exist to calculate loadability and represent it to end users, there has been little focus on tailoring loadability visualizations to the unique needs of TSOs. In this paper we involve operations domain experts in a human-centered design process to prototype two new loadability visualizations for TSOs. 
We contribute a design paper that yields: (1) a working model of the operator's decision making process, (2) example artifacts of the two data visualization techniques, and (3) a critical qualitative expert review of our designs.","accessible_pdf":false,"authors":[{"affiliations":["Hitachi Energy Research, Montreal, Canada"],"email":"dmarino@cim.mcgill.ca","is_corresponding":true,"name":"David Marino"},{"affiliations":["Carleton University, Ottawa, Canada"],"email":"maxwellkeleher@cmail.carleton.ca","is_corresponding":false,"name":"Maxwell Keleher"},{"affiliations":["Hitachi Energy Research, Krakow, Poland"],"email":"krzysztof.chmielowiec@hitachienergy.com","is_corresponding":false,"name":"Krzysztof Chmielowiec"},{"affiliations":["Hitachi Energy Research, Montreal, Canada"],"email":"antony.hilliard@hitachienergy.com","is_corresponding":false,"name":"Antony Hilliard"},{"affiliations":["Hitachi Energy Research, Krakow, Poland"],"email":"pawel.dawidowski@hitachienergy.com","is_corresponding":false,"name":"Pawel Dawidowski"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["David Marino"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-energyvis0","slot_id":"w-energyvis-3496","time_end":"","time_stamp":"","time_start":"","title":"Operator-Centered Design of a Nodal Loadability Network Visualization","uid":"w-energyvis-3496","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"The rapid growth of the solar energy industry requires advanced educational tools to train the next generation of engineers and technicians. We present a novel system for situated visualization of photovoltaic (PV) module performance, leveraging a combination of PV simulation, sun-sky position, and head-mounted augmented reality (AR). Our system is guided by four principles of development: simplicity, adaptability, collaboration, and maintainability, realized in six components. 
Users interactively manipulate a physical module's orientation and shading referents with immediate feedback on the module's performance.","accessible_pdf":false,"authors":[{"affiliations":["National Renewable Energy Laboratory, Golden, United States"],"email":"nicholas.brunhart-lupo@nrel.gov","is_corresponding":true,"name":"Nicholas Brunhart-Lupo"},{"affiliations":["National Renewable Energy Lab, Golden, United States"],"email":"kenny.gruchalla@nrel.gov","is_corresponding":false,"name":"Kenny Gruchalla"},{"affiliations":["Fort Lewis College, Durango, United States"],"email":"williams_l@fortlewis.edu","is_corresponding":false,"name":"Laurie Williams"},{"affiliations":["Fort Lewis College, Durango, United States"],"email":"selias@fortlewis.edu","is_corresponding":false,"name":"Steve Ellis"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Nicholas Brunhart-Lupo"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-energyvis0","slot_id":"w-energyvis-4332","time_end":"","time_stamp":"","time_start":"","title":"Situated Visualization of Photovoltaic Module Performance for Workforce Development","uid":"w-energyvis-4332","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"This paper introduces CPIE (Coal Pollution Impact Explorer), a spatiotemporal visual analytic tool developed for interactive visualization of coal pollution impacts. CPIE visualizes electricity-generating units (EGUs) and their contributions to statewide Medicare deaths related to coal PM2.5 emissions. The tool is designed to make scientific findings on the impacts of coal pollution more accessible to the general public and to raise awareness of the associated health risks. 
We present three use cases for CPIE: 1) the overall spatial distribution of all 480 facilities in the United States, their statewide impact on excess deaths, and the overall decreasing trend in deaths associated with coal pollution from 1999 to 2020; 2) the influence of pollution transport, where most deaths associated with a facility occur within the same state and neighboring states, but some occur far away; and 3) the effectiveness of intervention regulations, such as installing emissions control devices and shutting down coal facilities, in significantly reducing the number of deaths associated with coal pollution.","accessible_pdf":false,"authors":[{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"sjin86@gatech.edu","is_corresponding":true,"name":"Sichen Jin"},{"affiliations":["George Mason University, Fairfax, United States"],"email":"lhennem@gmu.edu","is_corresponding":false,"name":"Lucas Henneman"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"jessica.roberts@cc.gatech.edu","is_corresponding":false,"name":"Jessica Roberts"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Sichen Jin"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-energyvis0","slot_id":"w-energyvis-6102","time_end":"","time_stamp":"","time_start":"","title":"CPIE: A Spatiotemporal Visual Analytic Tool to Explore the Impact of Coal Pollution","uid":"w-energyvis-6102","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"This paper presents a novel open system, ChatGrid, for easy, intuitive, and interactive geospatial visualization of large-scale transmission networks. ChatGrid uses state-of-the-art techniques for geospatial visualization of large networks, including 2.5D map views, animated flows, hierarchical and level-based filtering and aggregation to provide visual information in an easy, cognitive manner. The highlight of ChatGrid is a natural language query based interface powered by a large language model (ChatGPT) that offers a natural and flexible interactive experience whereby users can ask questions and ChatGrid provides responses both in text and visually. 
This paper discusses the architecture, implementation, design decisions, and usage of large language models for ChatGrid.","accessible_pdf":false,"authors":[{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"sjin86@gatech.edu","is_corresponding":true,"name":"Sichen Jin"},{"affiliations":["Pacific Northwest National Laboratory, Richland, United States"],"email":"shrirang.abhyankar@pnnl.gov","is_corresponding":false,"name":"Shrirang Abhyankar"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Sichen Jin"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-energyvis0","slot_id":"w-energyvis-9750","time_end":"","time_stamp":"","time_start":"","title":"ChatGrid: Power Grid Visualization Empowered by a Large Language Model","uid":"w-energyvis-9750","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""}],"time_start":"","title":"EnergyVis","track":"","zoom_broadcast_link":"","zoom_private_link":"","zoom_private_meeting":"","zoom_private_password":""}]},"w-firstperson":{"event":"First-Person Visualizations for Outdoor Physical Activities: Challenges and Opportunities","event_description":"","event_prefix":"w-firstperson","event_type":"workshop","event_url":"","long_name":"First-Person Visualizations for Outdoor Physical Activities: Challenges and Opportunities","organizers":[],"sessions":[]},"w-future":{"event":"VISions of the Future: Workshop on Sustainable Practices within Visualization and Physicalisation","event_description":"","event_prefix":"w-future","event_type":"workshop","event_url":"","long_name":"VISions of the Future: Workshop on Sustainable Practices within Visualization and Physicalisation","organizers":[],"sessions":[{"chair":[],"discord_category":"","discord_channel":"","discord_channel_id":"","discord_link":"","event_prefix":"w-future","ff_link":"","session_id":"w-future0","session_image":"w-future0.png","time_end":"","time_slots":[{"abstract":"Data physicalizations are a time-tested practice for visualizing data, but the sustainability challenges of current physicalization practices have only recently been explored; for example, the usage of carbon-intensive, non-renewable materials like plastic and metal. This work explores clay physicalizations as an approach to these challenges. Using a three-stage process, we investigate the design and sustainability of clay 3D printed physicalizations: 1) exploring the properties and constraints of clay when extruded through a 3D printer, 2) testing a variety of data encodings that work within the constraints, and 3) introducing Rain Gauge, a clay physicalization exploring climate effects on climate data with an impermanent material. Throughout our process, we investigate the material circularity of clay-based digital fabrication by reclaiming and reusing the clay stock in each stage. 
Finally, we reflect on the implications of ceramic 3D printing for data physicalization through the lenses of practicality and sustainability.","accessible_pdf":false,"authors":[{"affiliations":["University of Minnesota, Minneapolis, United States"],"email":"bridger.g.herman@gmail.com","is_corresponding":true,"name":"Bridger Herman"},{"affiliations":["University of Minnesota, Minneapolis, United States"],"email":"jlrossi@umn.edu","is_corresponding":false,"name":"Jessica Rossi-Mastracci"},{"affiliations":["University of Minnesota, Minneapolis, United States"],"email":"will1070@umn.edu","is_corresponding":false,"name":"Heather Willy"},{"affiliations":["University of Minnesota, Minneapolis, United States"],"email":"mreicher@umn.edu","is_corresponding":false,"name":"Molly Reichert"},{"affiliations":["University of Minnesota, Minneapolis, United States"],"email":"dfk@umn.edu","is_corresponding":false,"name":"Daniel F. Keefe"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Bridger Herman"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-future0","slot_id":"w-future-1007","time_end":"","time_stamp":"","time_start":"","title":"Rain Gauge: Exploring the Design and Sustainability of 3D Printed Clay Physicalizations","uid":"w-future-1007","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"We explain our model of data-in-a-void and contrast it with the idea of data-voids to explore how the different framings impact our thinking on sustainability. This contrast supports our assertion that how we think about the data that we work with for visualization design impacts the direction of our thinking and our work. To show this we describe how we view the concept of data-in-a-void as different from that of data-voids. Then we provide two examples, one that relates to existing data about bicycle mobility, and one about non-data for local food production. In the discussion, we then untangle and outline how our thinking about data for sustainability is impacted and influenced by the data-in-a-void model.","accessible_pdf":false,"authors":[{"affiliations":["University of Calgary, Calgary, Canada"],"email":"karly.ross@ucalgary.ca","is_corresponding":true,"name":"Karly Ross"},{"affiliations":["University of Calgary, Calgary, Canada"],"email":"pratim.sengupta@ucalgary.ca","is_corresponding":false,"name":"Pratim Sengupta"},{"affiliations":["University of Calgary, Calgary, Canada"],"email":"wj@wjwillett.net","is_corresponding":false,"name":"Wesley Willett"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Karly Ross"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-future0","slot_id":"w-future-1008","time_end":"","time_stamp":"","time_start":"","title":"(Almost) All Data is Absent Data","uid":"w-future-1008","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"This study explores energy issues across various nations, focusing on sustainable energy availability and accessibility. 
Representatives from all continents were selected based on their HDI values. Data from Kaggle, spanning 2000-2020, was analyzed using Python to address questions on electricity access, renewable energy generation, and fossil fuel consumption. The research employed statistical and data visualization techniques to reveal trends and disparities. Findings underscore the importance of Python and Kaggle in data analysis. The study suggests expanding datasets and incorporating predictive modeling for future research to enhance understanding and decision-making in energy policies.","accessible_pdf":false,"authors":[{"affiliations":["Faculdade Nova Roma, Recife, Brazil"],"email":"gustavodssilva456@gmail.com","is_corresponding":true,"name":"Gustavo Santos Silva"},{"affiliations":["Faculdade Nova Roma, Recife, Brazil"],"email":"lartur671@gmail.com","is_corresponding":false,"name":"Artur Vin\u00edcius Lima Silva"},{"affiliations":["Faculdade Nova Roma, Recife, Brazil"],"email":"lpsouza612@gmail.com","is_corresponding":false,"name":"Lucas Pereira Souza"},{"affiliations":["Faculdade Nova Roma, Recife, Brazil"],"email":"adrianlauzid@gmail.com","is_corresponding":false,"name":"Adrian Lauzid"},{"affiliations":["Universidade Federal de Pernambuco, Recife, Brazil"],"email":"djmm@cin.ufpe.br","is_corresponding":false,"name":"Davi Maia"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Gustavo Santos Silva"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-future0","slot_id":"w-future-1011","time_end":"","time_stamp":"","time_start":"","title":"Renewable Energy Data Visualization: A study with Open Data","uid":"w-future-1011","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Information visualization holds significant potential to support sustainability goals such as environmental stewardship and climate resilience by transforming complex data into accessible visual formats that enhance public understanding of complex climate change data and drive actionable insights. While the field has predominantly focused on the analytical orientation of visualization, \u201ccritical visualization\u201d research challenges traditional visualization techniques and goals, expanding existing assumptions and conventions in the field. In this paper, I explore how reimagining overlooked aspects of data visualization\u2014such as engagement, emotional resonance, communication, and community empowerment\u2014can contribute to achieving sustainability objectives. I argue that by focusing on inclusive data visualization that promotes clarity, understandability, and public participation, we can make complex data more relatable and actionable, fostering broader connections and mobilizing collective action on critical issues like climate change. Moreover, I discuss the role of emotional receptivity in environmental data communication, stressing the need for visualizations that respect diverse cultural perspectives and emotional responses to achieve impactful outcomes. 
Drawing on insights from a decade of research in public participation and community engagement, I aim to highlight how data visualization can democratize data access and increase public involvement in order to contribute to a more sustainable and resilient future.","accessible_pdf":false,"authors":[{"affiliations":["University of Massachusetts Amherst, Amherst, United States"],"email":"nmahyar@cs.umass.edu","is_corresponding":true,"name":"Narges Mahyar"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Narges Mahyar"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-future0","slot_id":"w-future-1012","time_end":"","time_stamp":"","time_start":"","title":"Reimagining Data Visualization to Address Sustainability Goals","uid":"w-future-1012","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"This position paper discusses the role of data visualizations in journalism based on new areas of study such as visual journalism and data journalism, using examples from the coverage of the catastrophe that occurred in 2024 in Rio Grande do Sul, Brazil, affecting over 2 million people. This case served as a warning to the country about the importance of the climate change agenda and its consequences. The paper includes a literature review in the fields of journalism, data visualization, and psychology to explore the importance of data visualization in combating misinformation and in producing more reliable journalism as a tool for fighting climate change.","accessible_pdf":false,"authors":[{"affiliations":["Universidade Federal de Pernambuco, Recife, Brazil"],"email":"emilly.brito@ufpe.br","is_corresponding":true,"name":"Emilly Brito"},{"affiliations":["Universidade Federal de Pernambuco, Recife, Brazil"],"email":"nivan@cin.ufpe.br","is_corresponding":false,"name":"Nivan Ferreira"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Emilly Brito"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-future0","slot_id":"w-future-1013","time_end":"","time_stamp":"","time_start":"","title":"Visual and Data Journalism as Tools for Fighting Climate Change","uid":"w-future-1013","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""}],"time_start":"","title":"VISions of the Future","track":"","zoom_broadcast_link":"","zoom_private_link":"","zoom_private_meeting":"","zoom_private_password":""}]},"w-nlviz":{"event":"NLVIZ Workshop: Exploring Research Opportunities for Natural Language, Text, and Data Visualization","event_description":"","event_prefix":"w-nlviz","event_type":"workshop","event_url":"","long_name":"NLVIZ Workshop: Exploring Research Opportunities for Natural Language, Text, and Data Visualization","organizers":[],"sessions":[{"chair":[],"discord_category":"","discord_channel":"","discord_channel_id":"","discord_link":"","event_prefix":"w-nlviz","ff_link":"","session_id":"w-nlviz0","session_image":"w-nlviz0.png","time_end":"","time_slots":[{"abstract":"Large Language Models (LLMs) have been widely applied in summarization due to their speedy and 
high-quality text generation. Summarization for sensemaking involves information compression and insight extraction. Human guidance in sensemaking tasks can prioritize and cluster relevant information for LLMs. However, users must translate their cognitive thinking into natural language to communicate with LLMs. Can we use more readable and operable visual representations to guide the summarization process for sensemaking? Therefore, we propose introducing an intermediate step--a schematic visual workspace for human sensemaking--before the LLM generation to steer and refine the summarization process. We conduct a series of proof-of-concept experiments to investigate the potential for enhancing the summarization by GPT-4 through visual workspaces. Leveraging a textual sensemaking dataset with a ground truth summary, we evaluate the impact of a human-generated visual workspace on LLM-generated summarization of the dataset and assess the effectiveness of space-steered summarization. We categorize several types of extractable information from typical human workspaces that can be injected into engineered prompts to steer the LLM summarization. The results demonstrate how such workspaces can help align an LLM with the ground truth, leading to more accurate summarization results than without the workspaces.","accessible_pdf":false,"authors":[{"affiliations":["Computer Science Department, Blacksburg, United States"],"email":"tangxxwhu@gmail.com","is_corresponding":true,"name":"Xuxin Tang"},{"affiliations":["DoD, Laurel, United States"],"email":"ericpkrokos@gmail.com","is_corresponding":false,"name":"Eric Krokos"},{"affiliations":["Department of Defense, College Park, United States"],"email":"visual.tycho@gmail.com","is_corresponding":false,"name":"Kirsten Whitley"},{"affiliations":["City University of Hong Kong, Hong Kong, China"],"email":"canliu@cityu.edu.hk","is_corresponding":false,"name":"Can Liu"},{"affiliations":["Virginia Tech, Blacksburg, United States"],"email":"naren@cs.vt.edu","is_corresponding":false,"name":"Naren Ramakrishnan"},{"affiliations":["Virginia Tech, Blacksburg, United States"],"email":"north@vt.edu","is_corresponding":false,"name":"Chris North"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Xuxin Tang"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-nlviz0","slot_id":"w-nlviz-1004","time_end":"","time_stamp":"","time_start":"","title":"Steering LLM Summarization with Visual Workspaces for Sensemaking","uid":"w-nlviz-1004","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"We explore the use of segmentation and summarization methods for the generation of real-time conversation topic timelines, in the context of glanceable Augmented Reality (AR) visualization. Conversation timelines may serve to summarize and contextualize conversations as they are happening, helping to keep conversations on track. Because dialogue and conversations are broad and unpredictable by nature, and our processing is being done in real-time, not all relevant information may be present in the text at the time it is processed. Thus, we present considerations and challenges which may not be as prevalent in traditional implementations of topic classification and dialogue segmentation. 
Furthermore, we discuss how AR visualization requirements and design practices require an additional layer of decision making, which must be factored directly into the text processing algorithms. We explore three segmentation strategies -- using dialogue segmentation based on the text of the entire conversation, segmenting on 1-minute intervals, and segmenting on 10-second intervals -- and discuss our results.","accessible_pdf":false,"authors":[{"affiliations":["University of Calgary, Calgary, Canada"],"email":"shanna.hollingwor1@ucalgary.ca","is_corresponding":true,"name":"Shanna Li Ching Hollingworth"},{"affiliations":["University of Calgary, Calgary, Canada"],"email":"wj@wjwillett.net","is_corresponding":false,"name":"Wesley Willett"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Shanna Li Ching Hollingworth"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-nlviz0","slot_id":"w-nlviz-1007","time_end":"","time_stamp":"","time_start":"","title":"Towards Real-Time Speech Segmentation for Glanceable Conversation Visualization","uid":"w-nlviz-1007","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Academic literature reviews have traditionally relied on techniques such as keyword searches and accumulation of relevant back-references, using databases like Google Scholar or IEEEXplore. However, both the precision and accuracy of these search techniques are limited by the presence or absence of specific keywords, making literature review akin to searching for needles in a haystack. We present vitaLITy 2, a solution that uses a Large Language Model (LLM)-based approach to identify semantically relevant literature in a textual embedding space. We include a corpus of 66,692 papers from 1970-2023 which are searchable through text embeddings created by three language models. vitaLITy 2 contributes a novel Retrieval Augmented Generation (RAG) architecture and can be interacted with through an LLM with augmented prompts, including summarization of a collection of papers. vitaLITy 2 also provides a chat interface that allows users to perform complex queries without learning any new programming language. This also enables users to take advantage of the knowledge captured in the LLM from its enormous training corpus. 
Finally, we demonstrate the applicability of vitaLITy 2 through two usage scenarios.","accessible_pdf":false,"authors":[{"affiliations":["University of Nottingham, Nottingham, United Kingdom"],"email":"psxah15@nottingham.ac.uk","is_corresponding":false,"name":"Hongye An"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"arpitnarechania@gatech.edu","is_corresponding":true,"name":"Arpit Narechania"},{"affiliations":["University of Nottingham, Nottingham, United Kingdom"],"email":"kai.xu@nottingham.ac.uk","is_corresponding":false,"name":"Kai Xu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Arpit Narechania"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-nlviz0","slot_id":"w-nlviz-1008","time_end":"","time_stamp":"","time_start":"","title":"vitaLITy 2: Reviewing Academic Literature Using Large Language Models","uid":"w-nlviz-1008","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Analyzing and finding anomalies in multi-dimensional datasets is a cumbersome but vital task across different domains. In the context of financial fraud detection, analysts must quickly identify suspicious activity among transactional data. This is an iterative process made of complex exploratory tasks such as recognizing patterns, grouping, and comparing. To mitigate the information overload inherent to these steps, we present a tool combining automated information highlights, Large Language Model generated textual insights, and visual analytics, facilitating exploration at different levels of detail. We perform a segmentation of the data per analysis area and visually represent each one, making use of automated visual cues to signal which require more attention. Upon user selection of an area, our system provides textual and graphical summaries. The text, acting as a link between the high-level and detailed views of the chosen segment, allows for a quick understanding of relevant details. A thorough exploration of the data comprising the selection can be done through graphical representations. 
The feedback gathered in a study performed with seven domain experts suggests our tool effectively supports and guides exploratory analysis, easing the identification of suspicious information.","accessible_pdf":false,"authors":[{"affiliations":["Feedzai, Lisbon, Portugal"],"email":"beatriz.feliciano@feedzai.com","is_corresponding":true,"name":"Beatriz Feliciano"},{"affiliations":["Feedzai, Lisbon, Portugal"],"email":"rita.costa@feedzai.com","is_corresponding":false,"name":"Rita Costa"},{"affiliations":["Feedzai, Porto, Portugal"],"email":"jean.alves@feedzai.com","is_corresponding":false,"name":"Jean Alves"},{"affiliations":["Feedzai, Madrid, Spain"],"email":"javier.liebana@feedzai.com","is_corresponding":false,"name":"Javier Li\u00e9bana"},{"affiliations":["Feedzai, Lisbon, Portugal"],"email":"diogo.duarte@feedzai.com","is_corresponding":false,"name":"Diogo Ramalho Duarte"},{"affiliations":["Feedzai, Lisbon, Portugal"],"email":"pedro.bizarro@feedzai.com","is_corresponding":false,"name":"Pedro Bizarro"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Beatriz Feliciano"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-nlviz0","slot_id":"w-nlviz-1009","time_end":"","time_stamp":"","time_start":"","title":"\u201cShow Me What\u2019s Wrong!\u201d: Combining Charts and Text to Guide Data Analysis","uid":"w-nlviz-1009","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Dimension reduction (DR) can transform high-dimensional text embeddings into a 2D visual projection facilitating the exploration of document similarities. However, the projection often lacks connection to the text semantics, due to the opaque nature of text embeddings and non-linear dimension reductions. To address these problems, we propose a gradient-based method for visualizing the spatial semantics of dimensionally reduced text embeddings. This method employs gradients to assess the sensitivity of the projected documents with respect to the underlying words. The method can be applied to existing DR algorithms and text embedding models. Using these gradients, we designed a visualization system that incorporates spatial word clouds into the document projection space to illustrate the impactful text features. 
We further present three usage scenarios that demonstrate the practical applications of our system to facilitate the discovery and interpretation of underlying semantics in text projections.","accessible_pdf":false,"authors":[{"affiliations":["Computer Science, Virginia Tech, Blacksburg, United States"],"email":"wliu3@vt.edu","is_corresponding":false,"name":"Wei Liu"},{"affiliations":["Virginia Tech, Blacksburg, United States"],"email":"north@vt.edu","is_corresponding":false,"name":"Chris North"},{"affiliations":["Tulane University, New Orleans, United States"],"email":"rfaust1@tulane.edu","is_corresponding":true,"name":"Rebecca Faust"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Rebecca Faust"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-nlviz0","slot_id":"w-nlviz-1010","time_end":"","time_stamp":"","time_start":"","title":"Visualizing Spatial Semantics of Dimensionally Reduced Text Embeddings","uid":"w-nlviz-1010","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Recently, large language models (LLMs) have shown great promise in translating natural language (NL) queries into visualizations, but their \u201cblack-box\u201d nature often limits explainability and debuggability. In response, we present a comprehensive text prompt that, given a tabular dataset and an NL query about the dataset, generates an analytic specification including (detected) data attributes, (inferred) analytic tasks, and (recommended) visualizations. This specification captures key aspects of the query translation process, affording both explainability and debuggability. For instance, it provides mappings from the detected entities to the corresponding phrases in the input query, as well as the specific visual design principles that determined the visualization recommendations. Moreover, unlike prior LLM-based approaches, our prompt supports conversational interaction and ambiguity detection capabilities. 
In this paper, we detail the iterative process of curating our prompt, present a preliminary performance evaluation using GPT-4, and discuss the strengths and limitations of LLMs at various stages of query translation.","accessible_pdf":false,"authors":[{"affiliations":["UNC Charlotte, Charlotte, United States"],"email":"ssah1@uncc.edu","is_corresponding":true,"name":"Subham Sah"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"rmitra34@gatech.edu","is_corresponding":false,"name":"Rishab Mitra"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"arpitnarechania@gatech.edu","is_corresponding":false,"name":"Arpit Narechania"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"endert@gatech.edu","is_corresponding":false,"name":"Alex Endert"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"john.stasko@cc.gatech.edu","is_corresponding":false,"name":"John Stasko"},{"affiliations":["UNC Charlotte, Charlotte, United States"],"email":"wdou1@uncc.edu","is_corresponding":false,"name":"Wenwen Dou"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Subham Sah"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-nlviz0","slot_id":"w-nlviz-1011","time_end":"","time_stamp":"","time_start":"","title":"Generating Analytic Specifications for Data Visualization from Natural Language Queries using Large Language Models","uid":"w-nlviz-1011","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"We explore how natural language authoring with large language models (LLMs) can support the inline authoring of word-scale visualizations (WSVs). While word-scale visualizations that live alongside and within document text can support rich integration of data into written narratives and communication, these small visualizations have typically been challenging to author. We explore how modern LLMs---which are able to generate diverse visualization designs based on simple natural language descriptions---might allow authors to specify and insert new visualizations inline as they write text. 
Drawing on our experiences with an initial prototype built using GPT-4, we highlight the expressive potential of inline natural language visualization authoring and identify opportunities for further research.","accessible_pdf":false,"authors":[{"affiliations":["University of Calgary, Calgary, Canada"],"email":"paige.sobrien@ucalgary.ca","is_corresponding":true,"name":"Paige So'Brien"},{"affiliations":["University of Calgary, Calgary, Canada"],"email":"wj@wjwillett.net","is_corresponding":false,"name":"Wesley Willett"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Paige So'Brien"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-nlviz0","slot_id":"w-nlviz-1016","time_end":"","time_stamp":"","time_start":"","title":"Towards Inline Natural Language Authoring for Word-Scale Visualizations","uid":"w-nlviz-1016","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"As language models have become increasingly successful at a wide array of tasks, different prompt engineering methods have been developed alongside them in order to adapt these models to new tasks. One of them is Tree-of-Thoughts (ToT), a prompting strategy and framework for language model inference and problem-solving. It allows the model to explore multiple solution paths and select the best course of action, producing a tree-like structure of intermediate steps (i.e., thoughts). This method was shown to be effective for several problem types. However, the official implementation has a high barrier to usage as it requires setup overhead and incorporates task-specific problem templates which are difficult to generalize to new problem types. It also does not allow user interaction to improve or suggest new thoughts. We introduce iToT (interactive Tree-of-Thoughts), a generalized and interactive Tree of Thought prompting system. iToT allows users to explore each step of the model\u2019s problem-solving process as well as to correct and extend the model\u2019s thoughts. iToT revolves around a visual interface that facilitates simple and generic ToT usage and transparentizes the problem-solving process to users. This facilitates a better understanding of which thoughts and considerations lead to the model\u2019s final decision. 
Through two case studies, we demonstrate the usefulness of iToT in different human-LLM co-writing tasks.","accessible_pdf":false,"authors":[{"affiliations":["ETHZ, Zurich, Switzerland"],"email":"aboyle@student.ethz.ch","is_corresponding":false,"name":"Alan David Boyle"},{"affiliations":["ETH Z\u00fcrich, Z\u00fcrich, Switzerland"],"email":"igupta@ethz.ch","is_corresponding":true,"name":"Isha Gupta"},{"affiliations":["ETH Z\u00fcrich, Z\u00fcrich, Switzerland"],"email":"shoenig@student.ethz.ch","is_corresponding":false,"name":"Sebastian H\u00f6nig"},{"affiliations":["ETH Z\u00fcrich, Z\u00fcrich, Switzerland"],"email":"lukas.mautner98@gmail.com","is_corresponding":false,"name":"Lukas Mautner"},{"affiliations":["ETH Z\u00fcrich, Z\u00fcrich, Switzerland"],"email":"kenza.amara@ai.ethz.ch","is_corresponding":false,"name":"Kenza Amara"},{"affiliations":["ETH Z\u00fcrich, Z\u00fcrich, Switzerland"],"email":"furui.cheng@inf.ethz.ch","is_corresponding":false,"name":"Furui Cheng"},{"affiliations":["ETH Z\u00fcrich, Z\u00fcrich, Switzerland"],"email":"melassady@ai.ethz.ch","is_corresponding":false,"name":"Mennatallah El-Assady"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Isha Gupta"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-nlviz0","slot_id":"w-nlviz-1019","time_end":"","time_stamp":"","time_start":"","title":"iToT: An Interactive System for Customized Tree-of-Thought Generation","uid":"w-nlviz-1019","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Strategy management analyses are created by business consultants with common analysis frameworks (i.e. comparative analyses) and associated diagrams. We show these can be largely constructed using LLMs, starting with the extraction of insights from data, organization of those insights according to a strategy management framework, and then depiction in the typical strategy management diagram for that framework (static textual visualizations). 
We discuss caveats and future directions to generalize for broader uses.","accessible_pdf":false,"authors":[{"affiliations":["Uncharted Software, Toronto, Canada"],"email":"richard.brath@alumni.utoronto.ca","is_corresponding":true,"name":"Richard Brath"},{"affiliations":["Uncharted Software, Toronto, Canada"],"email":"miltonjbradley@gmail.com","is_corresponding":false,"name":"Adam James Bradley"},{"affiliations":["Uncharted Software, Toronto, Canada"],"email":"david@jonker.work","is_corresponding":false,"name":"David Jonker"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Richard Brath"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-nlviz0","slot_id":"w-nlviz-1020","time_end":"","time_stamp":"","time_start":"","title":"Strategic management analysis: from data to strategy diagram by LLM","uid":"w-nlviz-1020","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"We present a mixed-methods study to explore how large language models (LLMs) can assist users in the visual exploration and analysis of complex data structures, using knowledge graphs (KGs) as a baseline. We surveyed and interviewed 20 professionals who regularly work with LLMs with the goal of using them for (or alongside) KGs. From the analysis of our interviews, we contribute a preliminary roadmap for the design of LLM-driven visual analysis systems and outline future opportunities in this emergent design space.","accessible_pdf":false,"authors":[{"affiliations":["MIT Lincoln Laboratory, Lexington, United States"],"email":"harry.li@ll.mit.edu","is_corresponding":true,"name":"Harry Li"},{"affiliations":["Tufts University, Medford, United States"],"email":"gabriel.appleby@tufts.edu","is_corresponding":false,"name":"Gabriel Appleby"},{"affiliations":["MIT Lincoln Laboratory, Lexington, United States"],"email":"ashley.suh@ll.mit.edu","is_corresponding":false,"name":"Ashley Suh"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Harry Li"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-nlviz0","slot_id":"w-nlviz-1021","time_end":"","time_stamp":"","time_start":"","title":"A Preliminary Roadmap for LLMs as Visual Data Analysis Assistants","uid":"w-nlviz-1021","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"This study explores the potential of visual representation in understanding the structural elements of Arabic poetry, a subject of significant educational and research interest. Our objective is to make Arabic poetic works more accessible to readers of both Arabic and non-Arabic linguistic backgrounds by employing visualization, exploration, and analytical techniques. We transformed poetry texts into syllables, identified their metrical structures, segmented verses into patterns, and then converted these patterns into visual representations. Following this, we computed and visualized the dissimilarities between these images, and overlaid their differences. 
Our findings suggest that the positional patterns across a poem play a pivotal role in effective poetry clustering, as demonstrated by our newly computed metrics. The results of our clustering experiments showed a marked improvement over previous attempts, thereby providing new insights into the composition and structure of Arabic poetry. This study underscores the value of visual representation in enhancing our understanding of Arabic poetry.","accessible_pdf":false,"authors":[{"affiliations":["University of Neuch\u00e2tel, Neuch\u00e2tel, Switzerland"],"email":"abdelmalek.berkani@unine.ch","is_corresponding":true,"name":"Abdelmalek Berkani"},{"affiliations":["University of Neuch\u00e2tel, Neuch\u00e2tel, Switzerland"],"email":"adrian.holzer@unine.ch","is_corresponding":false,"name":"Adrian Holzer"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Abdelmalek Berkani"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-nlviz0","slot_id":"w-nlviz-1022","time_end":"","time_stamp":"","time_start":"","title":"Enhancing Arabic Poetic Structure Analysis through Visualization","uid":"w-nlviz-1022","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""}],"time_start":"","title":"NLVIZ","track":"","zoom_broadcast_link":"","zoom_private_link":"","zoom_private_meeting":"","zoom_private_password":""}]},"w-pdav":{"event":"Progressive Data Analysis and Visualization (PDAV) Workshop.","event_description":"","event_prefix":"w-pdav","event_type":"workshop","event_url":"","long_name":"Progressive Data Analysis and Visualization (PDAV) Workshop.","organizers":[],"sessions":[]},"w-storygenai":{"event":"Workshop on Data Storytelling in an Era of Generative AI","event_description":"","event_prefix":"w-storygenai","event_type":"workshop","event_url":"","long_name":"Workshop on Data Storytelling in an Era of Generative AI","organizers":[],"sessions":[{"chair":[],"discord_category":"","discord_channel":"","discord_channel_id":"","discord_link":"","event_prefix":"w-storygenai","ff_link":"","session_id":"w-storygenai0","session_image":"w-storygenai0.png","time_end":"","time_slots":[{"abstract":"Communicating data insights in an accessible and engaging manner to a broader audience remains a significant challenge. To address this problem, we introduce the Emoji Encoder, a tool that generates a set of emoji recommendations for the field and category names appearing in a tabular dataset. The selected set of emoji encodings can be used to generate configurable unit charts that combine plain text and emojis as word-scale graphics. These charts can serve to contrast values across multiple quantitative fields for each row in the data or to communicate trends over time. Any resulting chart is simply a block of text characters, meaning that it can be directly copied into a text message or posted on a communication platform such as Slack or Teams. This work represents a step toward our larger goal of developing novel, fun, and succinct data storytelling experiences that engage those who do not identify as data analysts. 
Emoji-based unit charts can offer contextual cues related to the data at the center of a conversation on platforms where emoji-rich communication is typical.","accessible_pdf":false,"authors":[{"affiliations":["University of Waterloo, Waterloo, Canada","Tableau Research, Seattle, United States"],"email":"mbrehmer@uwaterloo.ca","is_corresponding":true,"name":"Matthew Brehmer"},{"affiliations":["Tableau Research, Palo Alto, United States"],"email":"vsetlur@tableau.com","is_corresponding":false,"name":"Vidya Setlur"},{"affiliations":["McGraw Hill, Seattle, United States","Tableau Software, Seattle, United States"],"email":"zoezoezoe.cc@gmail.com","is_corresponding":false,"name":"Zoe Zoe"},{"affiliations":["Northeastern University, Portland, United States"],"email":"m.correll@northeastern.edu","is_corresponding":false,"name":"Michael Correll"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Matthew Brehmer"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-storygenai0","slot_id":"w-storygenai-5237","time_end":"","time_stamp":"","time_start":"","title":"The Data-Wink Ratio: Emoji Encoder for Generating Semantically-Resonant Unit Charts","uid":"w-storygenai-5237","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Data-driven storytelling serves as a crucial bridge for communicating ideas in a persuasive way. However, the manual creation of data stories is a multifaceted, labor-intensive, and case-specific effort, limiting their broader application. As a result, automating the creation of data stories has emerged as a significant research thrust. Despite advances in Artificial Intelligence, the systematic generation of data stories remains challenging due to their hybrid nature: they must frame a perspective based on a seed idea in a top-down manner, similar to traditional storytelling, while coherently grounding insights of given evidence in a bottom-up fashion, akin to data analysis. These dual requirements necessitate precise constraints on the permissible space of a data story. In this viewpoint, we propose integrating constraints into the data story generation process. Defined upon the hierarchies of interpretation and articulation, constraints shape both narrations and illustrations to align with seed ideas and contextualized evidence. We identify the taxonomy and required functionalities of these constraints. Although constraints can be heterogeneous and latent, we explore the potential to represent them in a computation-friendly fashion via Domain-Specific Languages. 
We believe that leveraging constraints will balance the artistic and engineering aspects of data story generation.","accessible_pdf":false,"authors":[{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"yu.zhe.s.shi@gmail.com","is_corresponding":true,"name":"Yu-Zhe Shi"},{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"haotian.li@connect.ust.hk","is_corresponding":false,"name":"Haotian Li"},{"affiliations":["Peking University, Beijing, China"],"email":"ruanlecheng@whai.pku.edu.cn","is_corresponding":false,"name":"Lecheng Ruan"},{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"huamin@cse.ust.hk","is_corresponding":false,"name":"Huamin Qu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yu-Zhe Shi"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-storygenai0","slot_id":"w-storygenai-6168","time_end":"","time_stamp":"","time_start":"","title":"Constraint representation towards precise data-driven storytelling","uid":"w-storygenai-6168","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Creating data stories from raw data is challenging due to humans\u2019 limited attention spans and the need for specialized skills. Recent advancements in large language models (LLMs) offer great opportunities to develop systems with autonomous agents to streamline the data storytelling workflow. Though multi-agent systems have benefits such as fully realizing LLM potentials with decomposed tasks for individual agents, designing such systems also faces challenges in task decomposition, performance optimization for sub-tasks, and workflow design. To better understand these issues, we develop Data Director, an LLM-based multi-agent system designed to automate the creation of animated data videos, a representative genre of data stories. Data Director interprets raw data, breaks down tasks, designs agent roles to make informed decisions automatically, and seamlessly integrates diverse components of data videos. A case study demonstrates Data Director\u2019s effectiveness in generating data videos. Throughout development, we have derived lessons learned from addressing challenges, guiding further advancements in autonomous agents for data storytelling. 
We also shed light on future directions for global optimization, human-in-the-loop design, and the application of advanced multi-modal LLMs.","accessible_pdf":false,"authors":[{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"lshenaj@connect.ust.hk","is_corresponding":true,"name":"Leixian Shen"},{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"haotian.li@connect.ust.hk","is_corresponding":false,"name":"Haotian Li"},{"affiliations":["Microsoft, Beijing, China"],"email":"yunvvang@gmail.com","is_corresponding":false,"name":"Yun Wang"},{"affiliations":["The Hong Kong University of Science and Technology, Hong Kong, China"],"email":"huamin@cse.ust.hk","is_corresponding":false,"name":"Huamin Qu"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Leixian Shen"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-storygenai0","slot_id":"w-storygenai-7043","time_end":"","time_stamp":"","time_start":"","title":"From Data to Story: Towards Automatic Animated Data Video Creation with LLM-based Multi-Agent Systems","uid":"w-storygenai-7043","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Crafting accurate and insightful narratives from data visualization is essential in data storytelling. Like creative writing, where one reads to write a story, data professionals must effectively \u201cread\u201d visualizations to create compelling data stories. In education, helping students develop these skills can be achieved through exercises that ask them to create narratives from data plots, demonstrating both \u201cshow\u201d (describing the plot) and \u201ctell\u201d (interpreting the plot). Providing formative feedback on these exercises is crucial but challenging in large-scale educational settings with limited resources. This study explores using GPT-4o, a multimodal LLM, to generate and evaluate narratives from data plots. The LLM was tested in zero-shot, one-shot, and two-shot scenarios, generating narratives and self-evaluating their depth. Human experts also assessed the LLM's outputs. Additionally, the study developed machine learning and LLM-based models to assess student-generated narratives using LLM-generated data. Human experts validated a subset of these machine assessments. 
The findings highlight the potential of LLMs to support scalable formative assessment in teaching data storytelling skills, which has important implications for AI-supported educational interventions.","accessible_pdf":false,"authors":[{"affiliations":["University of Maryland, Baltimore County, Baltimore, United States"],"email":"narens1@umbc.edu","is_corresponding":true,"name":"Naren Sivakumar"},{"affiliations":["University of Maryland, Baltimore County, Baltimore, United States"],"email":"lujiec@umbc.edu","is_corresponding":false,"name":"Lujie Karen Chen"},{"affiliations":["University of Maryland, Baltimore County, Baltimore, United States"],"email":"io11937@umbc.edu","is_corresponding":false,"name":"Pravalika Papasani"},{"affiliations":["University of Maryland, Baltimore County, Hanover, United States"],"email":"vignam1@umbc.edu","is_corresponding":false,"name":"Vigna Majmundar"},{"affiliations":["Towson University, Towson, United States"],"email":"jfeng@towson.edu","is_corresponding":false,"name":"Jinjuan Heidi Feng"},{"affiliations":["SRI International, Menlo Park, United States"],"email":"louise.yarnall@sri.com","is_corresponding":false,"name":"Louise Yarnall"},{"affiliations":["University of Alabama, Tuscaloosa, United States"],"email":"jgong@umbc.edu","is_corresponding":false,"name":"Jiaqi Gong"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Naren Sivakumar"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-storygenai0","slot_id":"w-storygenai-7072","time_end":"","time_stamp":"","time_start":"","title":"Show and Tell: Exploring Large Language Model\u2019s Potential in Formative Educational Assessment of Data Stories","uid":"w-storygenai-7072","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""}],"time_start":"","title":"Data Story GenAI","track":"","zoom_broadcast_link":"","zoom_private_link":"","zoom_private_meeting":"","zoom_private_password":""}]},"w-topoinvis":{"event":"TopoInVis: Workshop on Topological Data Analysis and Visualization","event_description":"","event_prefix":"w-topoinvis","event_type":"workshop","event_url":"","long_name":"TopoInVis: Workshop on Topological Data Analysis and Visualization","organizers":[],"sessions":[{"chair":[],"discord_category":"","discord_channel":"","discord_channel_id":"","discord_link":"","event_prefix":"w-topoinvis","ff_link":"","session_id":"w-topoinvis0","session_image":"w-topoinvis0.png","time_end":"","time_slots":[{"abstract":"Advances in high-performance computing require new ways to represent large-scale scientific data to support data storage, data transfers, and data analysis within scientific workflows. Multivariate functional approximation (MFA) has recently emerged as a new continuous meshless representation that approximates raw discrete data with a set of piecewise smooth functions. An MFA model of data thus offers a compact representation and supports high-order evaluation of values and derivatives anywhere in the domain. In this paper, we present CPE-MFA, the first critical point extraction framework designed for MFA models of large-scale, high-dimensional data. CPE-MFA extracts critical points directly from an MFA model without the need for discretization or resampling. 
This is the first step toward enabling continuous implicit models such as MFA to support topological data analysis at scale.","accessible_pdf":false,"authors":[{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"guanqunma94@gmail.com","is_corresponding":true,"name":"Guanqun Ma"},{"affiliations":["Argonne National Laboratory, Lemont, United States"],"email":"dlenz@anl.gov","is_corresponding":false,"name":"David Lenz"},{"affiliations":["Argonne National Laboratory, Lemont, United States"],"email":"tpeterka@mcs.anl.gov","is_corresponding":false,"name":"Tom Peterka"},{"affiliations":["The Ohio State University, Columbus, United States"],"email":"guo.2154@osu.edu","is_corresponding":false,"name":"Hanqi Guo"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"wang.bei@gmail.com","is_corresponding":false,"name":"Bei Wang"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Guanqun Ma"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-topoinvis0","slot_id":"w-topoinvis-1027","time_end":"","time_stamp":"","time_start":"","title":"Critical Point Extraction from Multivariate Functional Approximation","uid":"w-topoinvis-1027","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"3D symmetric tensor fields have a wide range of applications in science and engineering. The topology of such fields can provide critical insight into not only the structures in tensor fields but also their respective applications. Existing research focuses on the extraction of topological features such as degenerate curves and neutral surfaces. In this paper, we investigate the asymptotic behaviors of these topological features in the sphere of infinity. Our research leads to both theoretical analysis and observations that can aid further classifications of tensor field topology.","accessible_pdf":false,"authors":[{"affiliations":["Oregon State University, Corvallis, United States"],"email":"linxinw@oregonstate.edu","is_corresponding":false,"name":"Xinwei Lin"},{"affiliations":["Oregon State University, Corvallis, United States"],"email":"zhangyue@oregonstate.edu","is_corresponding":false,"name":"Yue Zhang"},{"affiliations":["Oregon State University, Corvallis, United States"],"email":"zhange@eecs.oregonstate.edu","is_corresponding":true,"name":"Eugene Zhang"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Eugene Zhang"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-topoinvis0","slot_id":"w-topoinvis-1031","time_end":"","time_stamp":"","time_start":"","title":"Asymptotic Topology of 3D Linear Symmetric Tensor Fields","uid":"w-topoinvis-1031","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Jacobi sets are an important method to investigate the relationship between Morse functions. The Jacobi set for two Morse functions is the set of all points where the functions' gradients are linearly dependent. 
Both the segmentation of the domain by Jacobi sets and the Jacobi sets themselves have proven to be useful tools in multi-field visualization, data analysis in various applications, and for accelerating extraction algorithms. On a triangulated grid, they can be calculated by a piecewise linear interpolation. In practice, Jacobi sets can become very complex and large due to noise and numerical errors. Some techniques for simplifying Jacobi sets exist, but these only reduce individual elements such as noise or are purely theoretical. These techniques often only change the visual representation of the Jacobi sets, but not the underlying data. In this paper, we present an algorithm that simplifies the Jacobi sets for 2D bivariate scalar fields and at the same time modifies the underlying bivariate scalar fields while preserving the essential structures of the fields. We use a neighborhood graph to select the areas to be reduced and collapse these cells individually. We investigate the influence of different neighborhood graphs and present an adaptation for the visualization of Jacobi sets that takes the collapsed cells into account. We apply our algorithm to a range of analytical and real-world data sets and compare it with established methods that also simplify the underlying bivariate scalar fields.","accessible_pdf":false,"authors":[{"affiliations":["Leipzig University, Leipzig, Germany"],"email":"raith@informatik.uni-leipzig.de","is_corresponding":true,"name":"Felix Raith"},{"affiliations":["Leipzig University, Leipzig, Germany"],"email":"scheuermann@informatik.uni-leipzig.de","is_corresponding":false,"name":"Gerik Scheuermann"},{"affiliations":["Leipzig University, Leipzig, Germany"],"email":"heine@informatik.uni-leipzig.de","is_corresponding":false,"name":"Christian Heine"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Felix Raith"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-topoinvis0","slot_id":"w-topoinvis-1033","time_end":"","time_stamp":"","time_start":"","title":"Topological Simplification of Jacobi Sets for Piecewise-Linear Bivariate 2D Scalar Fields","uid":"w-topoinvis-1033","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"The Morse-Smale complex is a standard tool in visual data analysis. The classic definition is based on a continuous view of the gradient of a scalar function where its zeros are the critical points. These points are connected via gradient curves and surfaces emanating from saddle points, known as separatrices. In a discrete setting, the Morse-Smale complex is commonly extracted by constructing a combinatorial gradient assuming the steepest descent direction. Previous works have shown that this method results in a geometric embedding of the separatrices that can be fundamentally different from those in the continuous case. To achieve a similar embedding, different approaches for constructing a combinatorial gradient were proposed. In this paper, we show that these approaches generate a different topology, i.e., the connectivity between critical points changes. Additionally, we demonstrate that the steepest descent method can compute topologically and geometrically accurate Morse-Smale complexes when applied to certain types of grids. 
Based on these observations, we suggest a method to attain both geometric and topological accuracy for the Morse-Smale complex of data sampled on a uniform grid.","accessible_pdf":false,"authors":[{"affiliations":["KTH Royal Institute of Technology, Stockholm, Sweden"],"email":"sonlt@kth.se","is_corresponding":true,"name":"Son Le Thanh"},{"affiliations":["KTH Royal Institute of Technology, Stockholm, Sweden"],"email":"ankele@iai.uni-bonn.de","is_corresponding":false,"name":"Michael Ankele"},{"affiliations":["KTH Royal Institute of Technology, Stockholm, Sweden"],"email":"weinkauf@kth.se","is_corresponding":false,"name":"Tino Weinkauf"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Son Le Thanh"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-topoinvis0","slot_id":"w-topoinvis-1034","time_end":"","time_stamp":"","time_start":"","title":"Revisiting Accurate Geometry for the Morse-Smale Complexes","uid":"w-topoinvis-1034","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"This paper presents a nested tracking framework for analyzing cycles in 2D force networks within granular materials. These materials are composed of interacting particles, whose interactions are described by a force network. Understanding the cycles within these networks at various scales and their evolution under external loads is crucial, as they significantly contribute to the mechanical and kinematic properties of the system. Our approach involves computing a cycle hierarchy by partitioning the 2D domain into regions bounded by cycles in the force network. We can adapt concepts from nested tracking graphs originally developed for merge trees by leveraging the duality between this partitioning and the cycles. 
We demonstrate the effectiveness of our method on two force networks derived from experiments with photo-elastic disks.","accessible_pdf":false,"authors":[{"affiliations":["Link\u00f6ping University, Link\u00f6ping, Sweden"],"email":"farhan.rasheed@liu.se","is_corresponding":true,"name":"Farhan Rasheed"},{"affiliations":["Indian Institute of Science, Bangalore, India"],"email":"abrarnaseer@iisc.ac.in","is_corresponding":false,"name":"Abrar Naseer"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"emma.nilsson@liu.se","is_corresponding":false,"name":"Emma Nilsson"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"talha.bin.masood@liu.se","is_corresponding":false,"name":"Talha Bin Masood"},{"affiliations":["Link\u00f6ping University, Norrk\u00f6ping, Sweden"],"email":"ingrid.hotz@liu.se","is_corresponding":false,"name":"Ingrid Hotz"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Farhan Rasheed"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-topoinvis0","slot_id":"w-topoinvis-1038","time_end":"","time_stamp":"","time_start":"","title":"Multi-scale Cycle Tracking in Dynamic Planar Graphs","uid":"w-topoinvis-1038","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Tetrahedral meshes are widely used due to their flexibility and adaptability in representing changes of complex geometries and topology. However, most existing data structures struggle to efficiently encode the irregular connectivity of tetrahedral meshes with billions of vertices. We address this problem by proposing a novel framework for efficient and scalable analysis of large tetrahedral meshes using Apache Spark. The proposed framework, called Tetra-Spark, features optimized approaches to locally compute many connectivity relations by first retrieving the Vertex-Tetrahedron (VT) relation. This strategy significantly improves Tetra-Spark's efficiency in performing morphology computations on large tetrahedral meshes. To prove the effectiveness and scalability of such a framework, we conduct a comprehensive comparison against a vanilla Spark implementation for the analysis of tetrahedral meshes. Our experimental evaluation shows that Tetra-Spark achieves up to a 78x speedup and reduces memory usage by up to 80% when retrieving connectivity relations with the VT relation available. 
This optimized design further accelerates subsequent morphology computations, resulting in up to a 47.7x speedup.","accessible_pdf":false,"authors":[{"affiliations":["University of Maryland, College Park, College Park, United States"],"email":"yhqian@umd.edu","is_corresponding":true,"name":"Yuehui Qian"},{"affiliations":["Clemson University, Clemson, United States"],"email":"guoxil@clemson.edu","is_corresponding":false,"name":"Guoxi Liu"},{"affiliations":["Clemson University, Clemson, United States"],"email":"fiurici@clemson.edu","is_corresponding":false,"name":"Federico Iuricich"},{"affiliations":["University of Maryland, College Park, United States"],"email":"deflo@umiacs.umd.edu","is_corresponding":false,"name":"Leila De Floriani"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Yuehui Qian"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-topoinvis0","slot_id":"w-topoinvis-1041","time_end":"","time_stamp":"","time_start":"","title":"Efficient representation and analysis for a large tetrahedral mesh using Apache Spark","uid":"w-topoinvis-1041","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""}],"time_start":"","title":"TopoInVis","track":"","zoom_broadcast_link":"","zoom_private_link":"","zoom_private_meeting":"","zoom_private_password":""}]},"w-uncertainty":{"event":"Uncertainty Visualization: Applications, Techniques, Software, and Decision Frameworks","event_description":"","event_prefix":"w-uncertainty","event_type":"workshop","event_url":"","long_name":"Uncertainty Visualization: Applications, Techniques, Software, and Decision Frameworks","organizers":[],"sessions":[{"chair":[],"discord_category":"","discord_channel":"","discord_channel_id":"","discord_link":"","event_prefix":"w-uncertainty","ff_link":"","session_id":"w-uncertainty0","session_image":"w-uncertainty0.png","time_end":"","time_slots":[{"abstract":"Symmetric second-order tensors are fundamental in various scientific and engineering domains, as they can represent properties such as material stresses or diffusion processes in brain tissue. In recent years, several approaches have been introduced and improved to analyze these fields using topological features, such as degenerate tensor locations, i.e., the tensor has repeated eigenvalues, or normal surfaces. Traditionally, the identification of such features has been limited to single tensor fields. However, it has become common to create ensembles to account for uncertainties and variability in simulations and measurements. In this work, we explore novel methods for describing and visualizing degenerate tensor locations in 3D symmetric second-order tensor field ensembles. We base our considerations on the tensor mode and analyze its practicality in characterizing the uncertainty of degenerate tensor locations before proposing a variety of visualization strategies to effectively communicate degenerate tensor information. 
We demonstrate our techniques for synthetic and simulation data sets. The results indicate that the interplay of different descriptions for uncertainty can effectively convey information on degenerate tensor locations.","accessible_pdf":false,"authors":[{"affiliations":["University of Cologne, Cologne, Germany"],"email":"tadea.schmitz@uni-koeln.de","is_corresponding":false,"name":"Tadea Schmitz"},{"affiliations":["RWTH Aachen University, Aachen, Germany"],"email":"gerrits@vis.rwth-aachen.de","is_corresponding":true,"name":"Tim Gerrits"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Tim Gerrits"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-uncertainty0","slot_id":"w-uncertainty-1007","time_end":"","time_stamp":"","time_start":"","title":"Exploring Uncertainty Visualization for Degenerate Tensors in 3D Symmetric Second-Order Tensor Field Ensembles","uid":"w-uncertainty-1007","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Understanding and communicating data uncertainty is crucial for informed decision-making across various domains, including finance, healthcare, and public policy. This study investigates the impact of gender and acoustic variables on decision-making, confidence, and trust through a crowdsourced experiment. We compared visualization-only representations of uncertainty to text-forward and speech-forward bimodal representations, including multiple synthetic voices across gender. Speech-forward representations led to an increase in risky decisions, and text-forward representations led to lower confidence. Contrary to prior work, speech-forward forecasts did not receive higher ratings of trust. Higher normalized pitch led to a slight increase in decision confidence, but other voice characteristics had minimal impact on decisions and trust. An exploratory analysis of accented speech showed consistent results with the main experiment and additionally indicated lower trust ratings for information presented in Indian and Kenyan accents. 
The results underscore the importance of considering acoustic and contextual factors in the presentation of data uncertainty.","accessible_pdf":false,"authors":[{"affiliations":["University of California Berkeley, Berkeley, United States"],"email":"chase_stokes@berkeley.edu","is_corresponding":true,"name":"Chase Stokes"},{"affiliations":["Stanford University, Stanford, United States"],"email":"sanker@stanford.edu","is_corresponding":false,"name":"Chelsea Sanker"},{"affiliations":["Versalytix, Columbus, United States"],"email":"bcogley@versalytix.com","is_corresponding":false,"name":"Bridget Cogley"},{"affiliations":["Tableau Research, Palo Alto, United States"],"email":"vsetlur@tableau.com","is_corresponding":false,"name":"Vidya Setlur"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Chase Stokes"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-uncertainty0","slot_id":"w-uncertainty-1009","time_end":"","time_stamp":"","time_start":"","title":"Voicing Uncertainty: How Speech, Text, and Visualizations Influence Decisions with Data Uncertainty","uid":"w-uncertainty-1009","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"The increasing adoption of Deep Neural Networks (DNNs) has led to their application in many challenging scientific visualization tasks. While advanced DNNs offer impressive generalization capabilities, understanding factors such as model prediction quality, robustness, and uncertainty is crucial. These insights can enable domain scientists to make informed decisions about their data. However, DNNs inherently lack the ability to estimate prediction uncertainty, necessitating new research to construct robust uncertainty-aware visualization techniques tailored for various visualization tasks. In this work, we propose uncertainty-aware implicit neural representations to model scalar field data sets effectively and comprehensively study the efficacy and benefits of estimated uncertainty information for volume visualization tasks. We evaluate the effectiveness of two principled deep uncertainty estimation techniques: (1) Deep Ensemble and (2) Monte Carlo Dropout (MCDropout). These techniques enable uncertainty-informed volume visualization in scalar field data sets. Our extensive exploration across multiple data sets demonstrates that uncertainty-aware models produce informative volume visualization results. 
Moreover, integrating prediction uncertainty enhances the trustworthiness of our DNN model, making it suitable for robustly analyzing and visualizing real-world scientific volumetric data sets.","accessible_pdf":false,"authors":[{"affiliations":["IIT Kanpur, Kanpur, India"],"email":"saklanishanu@gmail.com","is_corresponding":false,"name":"Shanu Saklani"},{"affiliations":["Indian Institute of Technology Kanpur, Kanpur, India"],"email":"chitwangoel1010@gmail.com","is_corresponding":false,"name":"Chitwan Goel"},{"affiliations":["Indian Institute of Technology Kanpur, Kanpur, India"],"email":"shrey.bansal75@gmail.com","is_corresponding":false,"name":"Shrey Bansal"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"jay.wang@rutgers.edu","is_corresponding":false,"name":"Zhe Wang"},{"affiliations":["Indian Institute of Technology Kanpur (IIT Kanpur), Kanpur, India"],"email":"soumya.cvpr@gmail.com","is_corresponding":true,"name":"Soumya Dutta"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"tushar.athawale@gmail.com","is_corresponding":false,"name":"Tushar M. Athawale"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"pugmire@ornl.gov","is_corresponding":false,"name":"David Pugmire"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"crj@sci.utah.edu","is_corresponding":false,"name":"Chris R. Johnson"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Soumya Dutta"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-uncertainty0","slot_id":"w-uncertainty-1010","time_end":"","time_stamp":"","time_start":"","title":"Uncertainty-Informed Volume Visualization using Implicit Neural Representation","uid":"w-uncertainty-1010","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Current research provides methods to communicate uncertainty and adapts classical algorithms of the visualization pipeline to take the uncertainty into account. Various existing visualization frameworks include methods to present uncertain data but do not offer transformation techniques tailored to uncertain data. Therefore, we propose a software package for uncertainty-aware data analysis in Python (UADAPy) offering methods for uncertain data along the visualization pipeline. We aim to provide a platform that is the foundation for further integration of uncertainty algorithms and visualizations. It provides common utility functionality to support research in uncertainty-aware visualization algorithms and makes state-of-the-art research results accessible to the end user. 
The project is available at https://github.com/UniStuttgart-VISUS/uadapy.","accessible_pdf":false,"authors":[{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"patrick.paetzold@uni-konstanz.de","is_corresponding":true,"name":"Patrick Paetzold"},{"affiliations":["University of Stuttgart, Stuttgart, Germany"],"email":"david.haegele@visus.uni-stuttgart.de","is_corresponding":false,"name":"David H\u00e4gele"},{"affiliations":["University of Stuttgart, Stuttgart, Germany"],"email":"m_ever14@uni-muenster.de","is_corresponding":false,"name":"Marina Evers"},{"affiliations":["University of Stuttgart, Stuttgart, Germany"],"email":"weiskopf@visus.uni-stuttgart.de","is_corresponding":false,"name":"Daniel Weiskopf"},{"affiliations":["University of Konstanz, Konstanz, Germany"],"email":"oliver.deussen@uni-konstanz.de","is_corresponding":false,"name":"Oliver Deussen"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Patrick Paetzold"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-uncertainty0","slot_id":"w-uncertainty-1011","time_end":"","time_stamp":"","time_start":"","title":"UADAPy: An Uncertainty-Aware Visualization and Analysis Toolbox","uid":"w-uncertainty-1011","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Uncertainty visualization is an emerging research topic in data visualization because neglecting uncertainty in visualization can lead to inaccurate assessments. In this short paper, we study the propagation of multivariate data uncertainty in visualization. Although there have been a few advancements in probabilistic uncertainty visualization of multivariate data, three critical challenges remain to be addressed. First, the state-of-the-art probabilistic uncertainty visualization framework is limited to bivariate data (two variables). Second, the existing uncertainty visualization algorithms use computationally intensive techniques and lack support for cross-platform portability. Third, as a consequence of the computational expense, integration into interactive production visualization tools is impractical. In this work, we address all three issues and make a threefold contribution. First, we generalize the state-of-the-art probabilistic framework for bivariate data to multivariate data with an arbitrary number of variables. Second, through utilization of VTK-m\u2019s shared-memory parallelism and cross-platform compatibility features, we demonstrate acceleration of multivariate uncertainty visualization on different many-core architectures, including OpenMP and AMD GPUs. Third, we demonstrate the integration of our algorithms with the ParaView software. 
We demonstrate the utility of our algorithms through experiments on multivariate simulation data.","accessible_pdf":false,"authors":[{"affiliations":["Indiana University Bloomington, Bloomington, United States"],"email":"gautamhari@outlook.com","is_corresponding":true,"name":"Gautam Hari"},{"affiliations":["Indiana University Bloomington, Bloomington, United States"],"email":"nrushad2001@gmail.com","is_corresponding":false,"name":"Nrushad A Joshi"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"jay.wang@rutgers.edu","is_corresponding":false,"name":"Zhe Wang"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"gongq@ornl.gov","is_corresponding":false,"name":"Qian Gong"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"pugmire@ornl.gov","is_corresponding":false,"name":"David Pugmire"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"kmorel@acm.org","is_corresponding":false,"name":"Kenneth Moreland"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"crj@sci.utah.edu","is_corresponding":false,"name":"Chris R. Johnson"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"klasky@ornl.gov","is_corresponding":false,"name":"Scott Klasky"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"pnorbert@ornl.gov","is_corresponding":false,"name":"Norbert Podhorszki"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"tushar.athawale@gmail.com","is_corresponding":false,"name":"Tushar M. Athawale"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Gautam Hari"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-uncertainty0","slot_id":"w-uncertainty-1012","time_end":"","time_stamp":"","time_start":"","title":"FunM^2C: A Filter for Uncertainty Visualization of Multivariate Data on Multi-Core Devices","uid":"w-uncertainty-1012","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Uncertainty is inherent to most data, including vector field data, yet it is often omitted in visualizations and representations. Effective uncertainty visualization can enhance the understanding and interpretability of vector field data. For instance, in the context of severe weather events such as hurricanes and wildfires, effective uncertainty visualization can provide crucial insights about fire spread or hurricane behavior and aid in resource management and risk mitigation. Glyphs are commonly used for representing vector uncertainty but are often limited to 2D. In this work, we present a glyph-based technique for accurately representing 3D vector uncertainty and a comprehensive framework for visualization, exploration, and analysis using our new glyphs. We employ hurricane and wildfire examples to demonstrate the efficacy of our glyph design and visualization tool in conveying vector field uncertainty.","accessible_pdf":false,"authors":[{"affiliations":["Scientific Computing and Imaging Institute, Salt Lake City, United States"],"email":"touermi@sci.utah.edu","is_corresponding":true,"name":"Timbwaoga A. J. 
Ouermi"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"jixianli@sci.utah.edu","is_corresponding":false,"name":"Jixian Li"},{"affiliations":["Sandia National Laboratories, Albuquerque, United States"],"email":"zbmorro@sandia.gov","is_corresponding":false,"name":"Zachary Morrow"},{"affiliations":["Sandia National Laboratories, Albuquerque, United States"],"email":"bartv@sandia.gov","is_corresponding":false,"name":"Bart van Bloemen Waanders"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"crj@sci.utah.edu","is_corresponding":false,"name":"Chris R. Johnson"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Timbwaoga A. J. Ouermi"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-uncertainty0","slot_id":"w-uncertainty-1013","time_end":"","time_stamp":"","time_start":"","title":"Glyph-Based Uncertainty Visualization and Analysis of Time-Varying Vector Field","uid":"w-uncertainty-1013","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Isosurface visualization is fundamental for exploring and analyzing 3D volumetric data. Marching cubes (MC) algorithms with linear interpolation are commonly used for isosurface extraction and visualization. Although linear interpolation is easy to implement, it has limitations when the underlying data is complex and high-order, which is the case for most real-world data. Linear interpolation can output vertices at the wrong location. Its inability to deal with sharp features and features smaller than grid cells can create holes and broken pieces in the extracted isosurface. Despite these limitations, isosurface visualizations typically do not include insight into the spatial location and the magnitude of these errors. We utilize high-order interpolation methods with MC algorithms and interactive visualization to highlight these uncertainties. Our visualization tool helps identify the regions of high interpolation errors. It also allows users to query local areas for details and compare the differences between isosurfaces from different interpolation methods. In addition, we employ high-order methods to identify and reconstruct possible features that linear methods cannot detect. We showcase how our visualization tool helps explore and understand the extracted isosurface errors through synthetic and real-world data.","accessible_pdf":false,"authors":[{"affiliations":["Scientific Computing and Imaging Institute, Salk Lake City, United States"],"email":"touermi@sci.utah.edu","is_corresponding":true,"name":"Timbwaoga A. J. Ouermi"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"jixianli@sci.utah.edu","is_corresponding":false,"name":"Jixian Li"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"tushar.athawale@gmail.com","is_corresponding":false,"name":"Tushar M. Athawale"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"crj@sci.utah.edu","is_corresponding":false,"name":"Chris R. Johnson"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Timbwaoga A. J. 
Ouermi"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-uncertainty0","slot_id":"w-uncertainty-1014","time_end":"","time_stamp":"","time_start":"","title":"Estimation and Visualization of Isosurface Uncertainty from Linear and High-Order Interpolation Methods","uid":"w-uncertainty-1014","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Functional depth is a well-known technique used to derive descriptive statistics (e.g., median, quartiles, and outliers) for 1D data. Surface boxplots extend this concept to ensembles of images, helping scientists and users identify representative and outlier images. However, the computational time for surface boxplots increases cubically with the number of ensemble members, making it impractical for integration into visualization tools. In this paper, we propose a deep-learning solution for efficient depth prediction and computation of surface boxplots for time-varying ensemble data. Our deep learning framework accurately predicts member depths in a surface boxplot, achieving average speedups of 6X on a CPU and 15X on a GPU for the 2D Red Sea dataset with 50 ensemble members compared to the traditional depth computation algorithm. Our approach achieves at least a 99\\% level of rank preservation, with order flipping occurring only at pairs with extremely similar depth values that pose no statistical differences. This local flipping does not significantly impact the overall depth order of the ensemble members.","accessible_pdf":false,"authors":[{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"mengjiao@sci.utah.edu","is_corresponding":true,"name":"Mengjiao Han"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"tushar.athawale@gmail.com","is_corresponding":false,"name":"Tushar M. Athawale"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"jixianli@sci.utah.edu","is_corresponding":false,"name":"Jixian Li"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"crj@sci.utah.edu","is_corresponding":false,"name":"Chris R. Johnson"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Mengjiao Han"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-uncertainty0","slot_id":"w-uncertainty-1015","time_end":"","time_stamp":"","time_start":"","title":"Accelerated Depth Computation for Surface Boxplots with Deep Learning","uid":"w-uncertainty-1015","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Wildfire poses substantial risks to our health, environment, and economy. Studying wildfire is challenging due to its complex inter- action with the atmosphere dynamics and the terrain. Researchers have employed ensemble simulations to study the relationship be- tween variables and mitigate uncertainties in unpredictable initial conditions. However, many domain scientists are unaware of the advanced visualization tools available for conveying uncertainty. 
To bring some uncertainty visualization techniques, we build an interactive visualization system that utilizes a band-depth-based method that provides a statistical summary and visualization for fire front contours from the ensemble. We augment the visualiza- tion system with capabilities to study wildfires as a dynamic system. In this paper, We demonstrate how our system can support domain scientists in studying fire spread patterns, identifying outlier simu- lations, and navigating to interesting instances based on a summary of events.","accessible_pdf":false,"authors":[{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"jixianli@sci.utah.edu","is_corresponding":true,"name":"Jixian Li"},{"affiliations":["Scientific Computing and Imaging Institute, Salk Lake City, United States"],"email":"touermi@sci.utah.edu","is_corresponding":false,"name":"Timbwaoga A. J. Ouermi"},{"affiliations":["University of Utah, Salt Lake City, United States"],"email":"crj@sci.utah.edu","is_corresponding":false,"name":"Chris R. Johnson"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Jixian Li"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-uncertainty0","slot_id":"w-uncertainty-1016","time_end":"","time_stamp":"","time_start":"","title":"Visualizing Uncertainties in Ensemble Wildfire Forecast Simulations","uid":"w-uncertainty-1016","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Uncertainty visualization is a key component in translating important insights from ensemble data into actionable decision-making by visually conveying various aspects of uncertainty within a system. With the recent advent of fast surrogate models for computationally expensive simulations, users can interact with more aspects of data spaces than ever before. However, the integration of ensemble data with surrogate models in a decision-making tool brings up new challenges for uncertainty visualization, namely how to reconcile and communicate the new and different types of uncertainties brought in by surrogates and how to utilize these new data estimates in actionable ways. In this work, we examine these issues as they relate to high-dimensional data visualization, the integration of discrete datasets and the continuous representations of those datasets, and the unique difficulties associated with systems that allow users to iterate between input and output spaces. We assess the role of uncertainty visualization in facilitating intuitive and actionable interaction with ensemble data and surrogate models, and highlight key challenges in this new frontier of computational simulation.","accessible_pdf":false,"authors":[{"affiliations":["National Renewable Energy Lab, Golden, United States"],"email":"sam.molnar@nrel.gov","is_corresponding":true,"name":"Sam Molnar"},{"affiliations":["National Renewable Energy Laboratory, Golden, United States"],"email":"jd.laurencechasen@nrel.gov","is_corresponding":false,"name":"J.D. 
Laurence-Chasen"},{"affiliations":["The Ohio State University, Columbus, United States","National Renewable Energy Lab, Golden, United States"],"email":"duan.418@osu.edu","is_corresponding":false,"name":"Yuhan Duan"},{"affiliations":["National Renewable Energy Laboratory, Golden, United States"],"email":"julie.bessac@nrel.gov","is_corresponding":false,"name":"Julie Bessac"},{"affiliations":["National Renewable Energy Laboratory, Golden, United States"],"email":"kristi.potter@nrel.gov","is_corresponding":false,"name":"Kristi Potter"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Sam Molnar"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-uncertainty0","slot_id":"w-uncertainty-1017","time_end":"","time_stamp":"","time_start":"","title":"Uncertainty Visualization Challenges in Decision Systems with Ensemble Data & Surrogate Models","uid":"w-uncertainty-1017","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Although people frequently make decisions based on uncertain forecasts about future events, there is little guidance about how best to represent the uncertainty in forecasts. One common approach is to use multiple forecast visualizations, in which multiple forecasts are plotted on the same graph. This provides an implicit representation of the uncertainty in the data, but it is not clear how many forecasts to show, or how viewers might be influenced by seeing the more extreme forecasts rather than those closer to the mean. In this study, we showed participants forecasts of wind speed data and they made decisions based on their predictions about the future wind speed. We allowed participants to choose how many forecasts to view prior to making a decision, and we manipulated the ordering of the forecasts and the cost of each additional forecast. We found that participants viewed more forecasts when the outcome was more ambiguous. The order of the forecasts had little impact on their decisions when there was no cost for the additional information. However, when there was a cost for each forecast, the participants were much more likely to make a guess based on only the first forecast shown. In this case, showing one of the extreme forecasts first led to less optimal decisions.","accessible_pdf":false,"authors":[{"affiliations":["Sandia National Laboratories, Albuquerque, United States"],"email":"lematze@sandia.gov","is_corresponding":true,"name":"Laura Matzen"},{"affiliations":["Sandia National Laboratories, Albuquerque, United States"],"email":"mcstite@sandia.gov","is_corresponding":false,"name":"Mallory C Stites"},{"affiliations":["Sandia National Laboratories, Albuquerque, United States"],"email":"kmdivis@sandia.gov","is_corresponding":false,"name":"Kristin M Divis"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"abendeck3@gatech.edu","is_corresponding":false,"name":"Alexander Bendeck"},{"affiliations":["Georgia Institute of Technology, Atlanta, United States"],"email":"john.stasko@cc.gatech.edu","is_corresponding":false,"name":"John Stasko"},{"affiliations":["Northeastern University, Boston, United States"],"email":"l.padilla@northeastern.edu","is_corresponding":false,"name":"Lace M. 
Padilla"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Laura Matzen"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-uncertainty0","slot_id":"w-uncertainty-1018","time_end":"","time_stamp":"","time_start":"","title":"Effects of Forecast Number, Order, and Cost in Multiple Forecast Visualizations","uid":"w-uncertainty-1018","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"We present a simple comparative framework for testing and developing uncertainty modeling in uncertain marching cubes implementations. The selection of a model to represent the probability distribution of uncertain values directly influences the memory use, run time, and accuracy of an uncertainty visualization algorithm. We use an entropy calculation directly on ensemble data to establish an expected result and then compare the entropy from various probability models, including uniform, Gaussian, histogram, and quantile models. Our results verify that models matching the distribution of the ensemble indeed match the entropy. We further show that fewer bins in nonparametric histogram models are more effective whereas large numbers of bins in quantile models approach data accuracy.","accessible_pdf":false,"authors":[{"affiliations":["University of Illinois Urbana-Champaign, Urbana, United States"],"email":"sisneros@illinois.edu","is_corresponding":true,"name":"Robert Sisneros"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"tushar.athawale@gmail.com","is_corresponding":false,"name":"Tushar M. 
Athawale"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"kmorel@acm.org","is_corresponding":false,"name":"Kenneth Moreland"},{"affiliations":["Oak Ridge National Laboratory, Oak Ridge, United States"],"email":"pugmire@ornl.gov","is_corresponding":false,"name":"David Pugmire"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Robert Sisneros"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-uncertainty0","slot_id":"w-uncertainty-1019","time_end":"","time_stamp":"","time_start":"","title":"An Entropy-Based Test and Development Framework for Uncertainty Modeling in Level-Set Visualizations","uid":"w-uncertainty-1019","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""}],"time_start":"","title":"Uncertainty Visualization","track":"","zoom_broadcast_link":"","zoom_private_link":"","zoom_private_meeting":"","zoom_private_password":""}]},"w-vis4climate":{"event":"Visualization for Climate Action and Sustainability","event_description":"","event_prefix":"w-vis4climate","event_type":"workshop","event_url":"","long_name":"Visualization for Climate Action and Sustainability","organizers":[],"sessions":[{"chair":[],"discord_category":"","discord_channel":"","discord_channel_id":"","discord_link":"","event_prefix":"w-vis4climate","ff_link":"","session_id":"w-vis4climate0","session_image":"w-vis4climate0.png","time_end":"","time_slots":[{"abstract":"re","accessible_pdf":false,"authors":[{"affiliations":["University of Toronto, Toronto, Canada"],"email":"fanny@dgp.toronto.edu","is_corresponding":true,"name":"Fanny Chevalier"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Fanny Chevalier"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-vis4climate0","slot_id":"w-vis4climate-1000","time_end":"","time_stamp":"","time_start":"","title":"TEST - Le papier","uid":"w-vis4climate-1000","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Presenting the effects of and effective countermeasures for climate change is a significant challenge in science communication. Data-driven storytelling and narrative visualization can be part of the solution. However, the communication is limited when restricted to global or cross-regional scales, as climate effects are particular to the location and adaptions need to be local. In this work, we focus on data-driven storytelling that communicates local impacts of climate change. We analyze the adoption of data-driven storytelling by local news media in addressing climate-related topics. Further, we investigate the specific characteristics of the local scenario and present three application examples to showcase potential local data-driven stories. Since these examples are rooted in university teaching, we also discuss educational aspects. 
Finally, we summarize the interdisciplinary research challenges and opportunities for application associated with data-driven storytelling in a local context.","accessible_pdf":false,"authors":[{"affiliations":["University of Bamberg, Bamberg, Germany"],"email":"fabian.beck@uni-bamberg.de","is_corresponding":true,"name":"Fabian Beck"},{"affiliations":["University of Bamberg, Bamberg, Germany"],"email":"lukas.panzer@uni-bamberg.de","is_corresponding":false,"name":"Lukas Panzer"},{"affiliations":["University of Bamberg, Bamberg, Germany"],"email":"marc.redepenning@uni-bamberg.de","is_corresponding":false,"name":"Marc Redepenning"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Fabian Beck"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-vis4climate0","slot_id":"w-vis4climate-1008","time_end":"","time_stamp":"","time_start":"","title":"Local Climate Data Stories: Data-driven Storytelling to Communicate Effects and Mitigation of Climate Change in a Local Context","uid":"w-vis4climate-1008","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Climate change\u2019s global impact calls for coordinated visualization efforts to enhance collaboration and communication among key partners such as domain experts, community members, and policy makers. We present a collaborative initiative, EcoViz, where visualization practitioners and key partners co-designed environmental data visualizations to illustrate impacts on ecosystems and the benefit of informed management and nature-based solutions. Our three use cases rely on unique processing pipelines to represent time-dependent natural phenomena by combining cinematic, scientific, and information visualization methods. Scientific outputs are displayed through narrative data-driven animations, interactive geospatial web applications, and immersive Unreal Engine applications. Each field\u2019s decision-making process is specific, driving design decisions about the best representation and medium for each use case. Data-driven cinematic videos with simple charts and minimal annotations proved most effective for engaging large, diverse audiences. This flexible medium facilitates reuse, maintains critical details, and integrates well into broader narrative videos. 
The need for interdisciplinary visualizations highlights the importance of funding to integrate visualization practitioners throughout the scientific process to better translate data and knowledge into informed policy and practice.","accessible_pdf":false,"authors":[{"affiliations":["University of California, San Diego, San Diego, United States"],"email":"jkb@ucsc.edu","is_corresponding":true,"name":"Jessica Marielle Kendall-Bar"},{"affiliations":["University of California, San Diego, La Jolla, United States"],"email":"inealey@ucsd.edu","is_corresponding":false,"name":"Isaac Nealey"},{"affiliations":["University of California, Santa Cruz, Santa Cruz, United States"],"email":"icostell@ucsc.edu","is_corresponding":false,"name":"Ian Costello"},{"affiliations":["University of California, Santa Cruz, Santa Cruz, United States"],"email":"chlowrie@ucsc.edu","is_corresponding":false,"name":"Christopher Lowrie"},{"affiliations":["University of California, San Diego, San Diego, United States"],"email":"khn009@ucsd.edu","is_corresponding":false,"name":"Kevin Huynh Nguyen"},{"affiliations":["University of California San Diego, La Jolla, United States"],"email":"pponganis@ucsd.edu","is_corresponding":false,"name":"Paul J. Ponganis"},{"affiliations":["University of California, Santa Cruz, Santa Cruz, United States"],"email":"mwbeck@ucsc.edu","is_corresponding":false,"name":"Michael W. Beck"},{"affiliations":["University of California, San Diego, San Diego, United States"],"email":"ialtintas@ucsd.edu","is_corresponding":false,"name":"\u0130lkay Alt\u0131nta\u015f"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Jessica Marielle Kendall-Bar"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-vis4climate0","slot_id":"w-vis4climate-1011","time_end":"","time_stamp":"","time_start":"","title":"EcoViz: an iterative methodology for designing multifaceted data-driven environmental visualizations that communicate ecosystem impacts and envision nature-based solutions","uid":"w-vis4climate-1011","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Household consumption significantly impacts climate change. Yet designing interventions to encourage consumption reduction that are tailored to each home's needs remains challenging. To address this, we developed Eco-Garden, a data sculpture designed to visualise household consumption aiming to promote sustainable practices. Eco-Garden serves as both an aesthetic piece for visitors and a functional tool for household members to understand their resource consumption. In this paper, we present the human-centred design process of Eco-Garden and the preliminary findings we made through the field study. We conducted a field study with 15 households to explore participants' experience with Eco-Garden and its potential to encourage sustainable practices at home. Our participants provided positive feedback on integrating Eco-Garden into their homes, highlighting considerations such as aesthetics, physicality, and the calm manner of presenting consumption data. 
Our insights contribute to developing data sculptures for households that can facilitate meaningful interactions with consumption data.","accessible_pdf":false,"authors":[{"affiliations":["Cardiff University, UK, Cardiff, United Kingdom"],"email":"pereraud@cardiff.ac.uk","is_corresponding":true,"name":"Dushani Ushettige"},{"affiliations":["Cardiff University, Cardiff, United Kingdom"],"email":"verdezotodiasn@cardiff.ac.uk","is_corresponding":false,"name":"Nervo Verdezoto"},{"affiliations":["Cardiff University, Cardiff, United Kingdom"],"email":"lannon@cardiff.ac.uk","is_corresponding":false,"name":"Simon Lannon"},{"affiliations":["Cardiff University, Cardiff, United Kingdom"],"email":"gwilliamja@cardiff.ac.uk","is_corresponding":false,"name":"Jullie Gwilliam"},{"affiliations":["Cardiff University, Cardiff, United Kingdom"],"email":"eslambolchilarp@cardiff.ac.uk","is_corresponding":false,"name":"Parisa Eslambolchilar"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Dushani Ushettige"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-vis4climate0","slot_id":"w-vis4climate-1018","time_end":"","time_stamp":"","time_start":"","title":"Eco-Garden: A Data Sculpture to Encourage Sustainable Practices in Everyday Life in Households","uid":"w-vis4climate-1018","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"Consumers have the potential to play a large role in mitigating the climate crisis by taking on more pro-environmental behavior, for example by making more sustainable food choices. However, while environmental awareness is common among consumers, it is not always clear what the current impact of one's own food choices is, and consequently it is not always clear how or why their own behavior must change, or how important the change is. Immersive technologies have been shown to aid in these aspects. In this paper, we bring food production into the home by means of handheld augmented reality. Using the current prototype, users can input which ingredients are in their meal on their smartphone, and after making a 3D scan of their kitchen, plants, livestock, feed, and water required for all are visualized in front of them. 
In this paper, we describe the design of the current prototype and, by analyzing the current state of research on virtual and augmented reality for sustainability research, we describe in which ways the application could be extended in terms of data, models, and interaction, to investigate the most prominent issues within environmental sustainability communications research.","accessible_pdf":false,"authors":[{"affiliations":["Wageningen University and Research, Wageningen, Netherlands"],"email":"nina.rosa-dejong@wur.nl","is_corresponding":true,"name":"Nina Rosa"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Nina Rosa"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-vis4climate0","slot_id":"w-vis4climate-1023","time_end":"","time_stamp":"","time_start":"","title":"AwARe: Using handheld augmented reality for researching the potential of food resource information visualization","uid":"w-vis4climate-1023","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""},{"abstract":"This paper details the development and implementation of a collaborative exhibit at Boston\u2019s Museum of Science showcasing interactive data visualizations designed to educate the public on global sustainability and urban environmental concerns. Supported by cross-institutional collaboration, the exhibit provided a rich real-world learning opportunity for students, resulting in a set of public-facing educational resources that informed visitors of global sustainability concerns through the lens of a local municipality. The realization of this project was made possible only by a close collaboration between a municipality, science museum and academic partners, all of whom committed their expertise and resources at both leadership and implementation team levels. This initiative highlights the value of cross-institutional collaboration to ignite the transformative potential of interactive visualizations in driving public engagement of local and global sustainability issues. 
Focusing on promoting sustainability and enhancing community well-being, this initiative highlights the potential of cross-institutional collaboration and locally-relevant interactive data visualizations to educate, inspire action, and foster community engagement in addressing climate change and urban sustainability.","accessible_pdf":false,"authors":[{"affiliations":["Brown University, Providence, United States","Rhode Island School of Design, Providence, United States"],"email":"bae@brown.edu","is_corresponding":true,"name":"Beth Altringer Eagle"},{"affiliations":["Harvard University, Cambridge, United States"],"email":"sylvan@media.mit.edu","is_corresponding":false,"name":"Elisabeth Sylvan"}],"bunny_ff_link":"","bunny_ff_subtitles":"","bunny_prerecorded_link":"","bunny_prerecorded_subtitles":"","contributors":["Beth Altringer Eagle"],"doi":"","external_paper_link":"","fno":"","has_image":false,"has_pdf":false,"image_caption":"","keywords":[],"open_access":false,"paper_award":"","paper_type":"workshop","presentation_mode":"","session_id":"w-vis4climate0","slot_id":"w-vis4climate-1024","time_end":"","time_stamp":"","time_start":"","title":"Cultivating Climate Action Through Multi-Institutional Collaboration: Innovative Data Visualization Educational Programs and Exhibits for Public Engagement","uid":"w-vis4climate-1024","youtube_ff_id":"","youtube_ff_link":"","youtube_prerecorded_id":"","youtube_prerecorded_link":""}],"time_start":"","title":"Vis4Climate","track":"","zoom_broadcast_link":"","zoom_private_link":"","zoom_private_meeting":"","zoom_private_password":""}]},"w-visxai":{"event":"VISxAI: 7th Workshop on Visualization for AI Explainability","event_description":"","event_prefix":"w-visxai","event_type":"workshop","event_url":"","long_name":"VISxAI: 7th Workshop on Visualization for AI Explainability","organizers":[],"sessions":[]}}
diff --git a/program/session_a-ldav0.html b/program/session_a-ldav0.html
new file mode 100644
index 000000000..6d630bf20
--- /dev/null
+++ b/program/session_a-ldav0.html
@@ -0,0 +1,187 @@
+ IEEE VIS 2024 Content: LDAV: 13th IEEE Symposium on Large Data Analysis and Visualization: LDAV

LDAV: 13th IEEE Symposium on Large Data Analysis and Visualization: LDAV

Room: To Be Announced


Efficient Analysis and Visualization of High-Resolution Computed Tomography Data for the Exploration of Enclosed Cuneiform Tablets

Authors: Stephan Olbrich, Andreas Beckert, Cécile Michel, Christian Schroer, Samaneh Ehteram, Andreas Schropp, Philipp Paetzold

Stephan Olbrich

Out-of-Core Dimensionality Reduction for Large Data via Out-of-Sample Extensions

Authors: Luca Marcel Reichmann, David Hägele, Daniel Weiskopf

David Hägele

Web-based Visualization and Analytics of Petascale data: Equity as a Tide that Lifts All Boats

Authors: Aashish Panta, Xuan Huang, Nina McCurdy, David Ellsworth, Amy Gooch, Giorgio Scorzelli, Hector Torres, Patrice Klein, Gustavo Ovando-Montejo, Valerio Pascucci

Aashish Panta

Distributed Path Compression for Piecewise Linear Morse-Smale Segmentations and Connected Components

Authors: Michael Will, Jonas Lukasczyk, Julien Tierny, Christoph Garth

Michael Will

Standardized Data-Parallel Rendering Using ANARI

Authors: Ingo Wald, Stefan Zellmann, Jefferson Amstutz, Qi Wu, Kevin Shawn Griffin, Milan Jaroš, Stefan Wesner

Stefan Zellmann

You may want to also jump to the parent event to see related presentations: LDAV: 13th IEEE Symposium on Large Data Analysis and Visualization

If there are any issues with the virtual streaming site, you can try to access the Discord and Slido pages for this session directly.

\ No newline at end of file
diff --git a/program/session_full0.html b/program/session_full0.html
index 056d15c44..ebbc353a7 100644
--- a/program/session_full0.html
+++ b/program/session_full0.html
@@ -1,4 +1,4 @@
- IEEE VIS 2024 Content: VIS Full Papers: Full Papers

VIS Full Papers: Full Papers

Room: To Be Announced


Revealing Interaction Dynamics: Multi-Level Visual Exploration of User Strategies with an Interactive Digital Environment

Authors: Peilin Yu, Aida Nordman, Marta M. Koc-Januchta, Konrad J Schönborn, Lonni Besançon, Katerina Vrotsou

Peilin Yu

Team-Scouter: Simulative Visual Analytics of Soccer Player Scouting

Authors: Anqi Cao, Xiao Xie, Runjin Zhang, Yuxin Tian, Mu Fan, Hui Zhang, Yingcai Wu

Anqi Cao

Visualizing Temporal Topic Embeddings with a Compass

Authors: Daniel Palamarchuk, Lemara Williams, Brian Mayer, Thomas Danielson, Rebecca Faust, Larry M Deschaine PhD, Chris North

Daniel Palamarchuk

Blowing Seeds Across Gardens: Visualizing Implicit Propagation of Cross-Platform Social Media Posts

Authors: Jianing Yin, Hanze Jia, Buwei Zhou, Tan Tang, Lu Ying, Shuainan Ye, Tai-Quan Peng, Yingcai Wu

Jianing Yin

DITTO: A Visual Digital Twin for Interventions and Temporal Treatment Outcomes in Head and Neck Cancer

Authors: Andrew Wentzel, Serageldin Attia, Xinhua Zhang, Guadalupe Canahuate, Clifton David Fuller, G. Elisabeta Marai

Andrew Wentzel

DeLVE into Earth’s Past: A Visualization-Based Exhibit Deployed Across Multiple Museum Contexts

Authors: Mara Solen, Nigar Sultana, Laura A. Lukes, Tamara Munzner

Mara Solen

AdversaFlow: Visual Red Teaming for Large Language Models with Multi-Level Adversarial Flow

Authors: Dazhen Deng, Chuhan Zhang, Huawei Zheng, Yuwen Pu, Shouling Ji, Yingcai Wu

Dazhen Deng

Entanglements for Visualization: Changing Research Outcomes through Feminist Theory

Authors: Derya Akbaba, Lauren Klein, Miriah Meyer

Derya Akbaba

Fine-Tuned Large Language Model for Visualization System: A Study on Self-Regulated Learning in Education

Authors: Lin Gao, Jing Lu, Zekai Shao, Ziyue Lin, Shengbin Yue, Chiokit Ieong, Yi Sun, Rory Zauner, Zhongyu Wei, Siming Chen

Lin Gao

Smartboard: Visual Exploration of Team Tactics with LLM Agent

Authors: Ziao Liu, Xiao Xie, Moqi He, Wenshuo Zhao, Yihong Wu, Liqi Cheng, Hui Zhang, Yingcai Wu

Ziao Liu

Causal Priors and Their Influence on Judgements of Causality in Visualized Data

Authors: Arran Zeyu Wang, David Borland, Tabitha C. Peck, Wenyuan Wang, David Gotz

Arran Zeyu Wang

PhenoFlow: A Human-LLM Driven Visual Analytics System for Exploring Large and Complex Stroke Datasets

Authors: Jaeyoung Kim, Sihyeon Lee, Hyeon Jeon, Keon-Joo Lee, Bohyoung Kim, HEE JOON, Jinwook Seo

Jaeyoung Kim

Touching the Ground: Evaluating the Effectiveness of Data Physicalizations for Spatial Data Analysis Tasks

Authors: Bridger Herman, Cullen D. Jackson, Daniel F. Keefe

Bridger Herman

Compress and Compare: Interactively Evaluating Efficiency and Behavior Across ML Model Compression Experiments

Authors: Angie Boggust, Venkatesh Sivaraman, Yannick Assogba, Donghao Ren, Dominik Moritz, Fred Hohman

Angie Boggust

CompositingVis: Exploring Interaction for Creating Composite Visualizations in Immersive Environments

Authors: Qian Zhu, Tao Lu, Shunan Guo, Xiaojuan Ma, Yalong Yang

Qian Zhu

SimpleSets: Capturing Categorical Point Patterns with Simple Shapes

Authors: Steven van den Broek, Wouter Meulemans, Bettina Speckmann

Steven van den Broek

Charting EDA: How Visualizations and Interactions Shape Analysis in Computational Notebooks.

Authors: Dylan Wootton, Amy Rae Fox, Evan Peck, Arvind Satyanarayan

Dylan Wootton

Does This Have a Particular Meaning?: Interactive Pattern Explanation for Network Visualizations

Authors: Xinhuan Shu, Alexis Pister, Junxiu Tang, Fanny Chevalier, Benjamin Bach

Xinhuan Shu

Unmasking Dunning-Kruger Effect in Visual Reasoning and Visual Data Analysis

Authors: Mengyu Chen, Yijun Liu, Emily Wall

Mengyu Chen

ProvenanceWidgets: A Library of UI Control Elements to Track and Dynamically Overlay Analytic Provenance

Authors: Arpit Narechania, Kaustubh Odak, Mennatallah El-Assady, Alex Endert

Arpit Narechania

Improved Visual Saliency of Graph Clusters with Orderable Node-Link Layouts

Authors: Nora Al-Naami, Nicolas Medoc, Matteo Magnani, Mohammad Ghoniem

Mohammad Ghoniem

Graph Transformer for Label Placement

Authors: Jingwei Qu, Pingshun Zhang, Enyu Che, Yinan Chen, Haibin Ling

Jingwei Qu

Aardvark: Composite Visualizations of Trees, Time-Series, and Images

Authors: Devin Lange, Robert L Judson-Torres, Thomas A Zangle, Alexander Lex

Devin Lange

Loops: Leveraging Provenance and Visualization to Support Exploratory Data Analysis in Notebooks

Authors: Klaus Eckelt, Kiran Gadhave, Alexander Lex, Marc Streit

Klaus Eckelt

Trust Your Gut: Comparing Human and Machine Inference from Noisy Visualizations

Authors: Ratanond Koonchanok, Michael E. Papka, Khairi Reda

Ratanond Koonchanok

UnDRground Tubes: Exploring Spatial Data With Multidimensional Projections and Set Visualization

Authors: Nikolaus Piccolotto, Markus Wallinger, Silvia Miksch, Markus Bögl

Nikolaus Piccolotto

PREVis: Perceived Readability Evaluation for Visualizations

Authors: Anne-Flore Cabouat, Tingying He, Petra Isenberg, Tobias Isenberg

Anne-Flore Cabouat

Uncertainty Visualization of Critical Points of 2D Scalar Fields for Parametric and Nonparametric Probabilistic Models

Authors: Tushar M. Athawale, Zhe Wang, David Pugmire, Kenneth Moreland, Qian Gong, Scott Klasky, Chris R. Johnson, Paul Rosen

Tushar M. Athawale

What Can Interactive Visualization do for Participatory Budgeting in Chicago?

Authors: Alex Kale, Danni Liu, Maria Gabriela Ayala, Harper Schwab, Andrew M McNutt

Alex Kale

The Effect of Visual Aids on Reading Numeric Data Tables

Authors: YongFeng Ji, Charles Perin, Miguel A Nacenta

Charles Perin

Mixing Linters with GUIs: A Color Palette Design Probe

Authors: Andrew M McNutt, Maureen Stone, Jeffrey Heer

Andrew M McNutt

A Qualitative Analysis of Common Practices in Annotations: A Taxonomy and Design Space

Authors: Md Dilshadur Rahman, Ghulam Jilani Quadri, Bhavana Doppalapudi, Danielle Albers Szafir, Paul Rosen

Md Dilshadur Rahman

Talk to the Wall: The Role of Speech Interaction in Collaborative Visual Analytics

Authors: Gabriela Molina León, Anastasia Bezerianos, Olivier Gladin, Petra Isenberg

Gabriela Molina León

BEMTrace: Visualization-driven approach for deriving Building Energy Models from BIM

Authors: Andreas Walch, Attila Szabo, Harald Steinlechner, Thomas Ortner, Eduard Gröller, Johanna Schmidt

Johanna Schmidt

VMC: A Grammar for Visualizing Statistical Model Checks

Authors: Ziyang Guo, Alex Kale, Matthew Kay, Jessica Hullman

Ziyang Guo

The Language of Infographics: Toward Understanding Conceptual Metaphor Use in Scientific Storytelling

Authors: Hana Pokojná, Tobias Isenberg, Stefan Bruckner, Barbora Kozlikova, Laura Garrison

Hana Pokojná

How Good (Or Bad) Are LLMs in Detecting Misleading Visualizations

Authors: Leo Yu-Ho Lo, Huamin Qu

Leo Yu-Ho Lo

Motion-Based Visual Encoding Can Improve Performance on Perceptual Tasks with Dynamic Time Series

Authors: Songwen Hu, Ouxun Jiang, Jeffrey Riedmiller, Cindy Xiong Bearfield

Songwen Hu

LLM Comparator: Interactive Analysis of Side-by-Side Evaluation of Large Language Models

Authors: Minsuk Kahng, Ian Tenney, Mahima Pushkarna, Michael Xieyang Liu, James Wexler, Emily Reif, Krystal Kallarackal, Minsuk Chang, Michael Terry, Lucas Dixon

Minsuk Kahng

StuGPTViz: A Visual Analytics Approach to Understand Student-ChatGPT Interactions

Authors: Zixin Chen, Jiachen Wang, Meng Xia, Kento Shigyo, Dingdong Liu, Rong Zhang, Huamin Qu

Zixin Chen

VisEval: A Benchmark for Data Visualization in the Era of Large Language Models

Authors: Nan Chen, Yuge Zhang, Jiahang Xu, Kan Ren, Yuqing Yang

Nan Chen

Understanding Visualization Authoring Techniques for Genomics Data in the Context of Personas and Tasks

Authors: Astrid van den Brandt, Sehi L'Yi, Huyen N. Nguyen, Anna Vilanova, Nils Gehlenborg

Astrid van den Brandt

Sportify: Question Answering with Embedded Visualizations and Personified Narratives for Sports Video

Authors: Chunggi Lee, Tica Lin, Chen Zhu-Tian, Hanspeter Pfister

Chunggi Lee

FPCS: Feature Preserving Compensated Sampling of Streaming Time Series Data

Authors: Hongyan Li, Bo Yang, Yansong Chua

Hongyan Li

SLInterpreter: An Exploratory and Iterative Human-AI Collaborative System for GNN-based Synthetic Lethal Prediction

Authors: Haoran Jiang, Shaohan Shi, Shuhao Zhang, Jie Zheng, Quan Li

Haoran Jiang

Practices and Strategies in Responsive Thematic Map Design: A Report from Design Workshops with Experts

Authors: Sarah Schöttler, Uta Hinrichs, Benjamin Bach

Sarah Schöttler

Discursive Patinas: Anchoring Discussions in Data Visualizations

Authors: Tobias Kauer, Derya Akbaba, Marian Dörk, Benjamin Bach

Tobias Kauer

D-Tour: Semi-Automatic Generation of Interactive Guided Tours for Visualization Dashboard Onboarding

Authors: Vaishali Dhanoa, Andreas Hinterreiter, Vanessa Fediuk, Niklas Elmqvist, Eduard Gröller, Marc Streit

Vaishali Dhanoa

Unveiling How Examples Shape Data Visualization Design Outcomes

Authors: Hannah K. Bako, Xinyi Liu, Grace Ko, Hyemi Song, Leilani Battle, Zhicheng Liu

Hannah K. Bako

Promises and Pitfalls: Using Large Language Models to Generate Visualization Items

Authors: Yuan Cui, Lily W. Ge, Yiren Ding, Lane Harrison, Fumeng Yang, Matthew Kay

Yuan Cui

DG Comics: Semi-Automatically Authoring Graph Comics for Dynamic Graphs

Authors: Joohee Kim, Hyunwook Lee, Duc M. Nguyen, Minjeong Shin, Bum Chul Kwon, Sungahn Ko, Niklas Elmqvist

Joohee Kim

ParamsDrag: Interactive Parameter Space Exploration via Image-Space Dragging

Authors: Guan Li, Yang Liu, Guihua Shan, Shiyu Cheng, Weiqun Cao, Junpeng Wang, Ko-Chih Wang

Guan Li

Visualization Atlases: Explaining and Exploring Complex Topics through Data, Visualization, and Narration

Authors: Jinrui Wang, Xinhuan Shu, Benjamin Bach, Uta Hinrichs

Jinrui Wang

User Experience of Visualizations in Motion: A Case Study and Design Considerations

Authors: Lijie Yao, Federica Bucchieri, Victoria McArthur, Anastasia Bezerianos, Petra Isenberg

Lijie Yao

A Practical Solver for Scalar Data Topological Simplification

Authors: Mohamed Kissi, Mathieu Pont, Joshua A Levine, Julien Tierny

Mohamed Kissi

DracoGPT: Extracting Visualization Design Preferences from Large Language Models

Authors: Huichen Will Wang, Mitchell L. Gordon, Leilani Battle, Jeffrey Heer

Huichen Will Wang

Towards Dataset-scale and Feature-oriented Evaluation of Text Summarization in Large Language Model Prompts

Authors: Sam Yu-Te Lee, Aryaman Bahukhandi, Dongyu Liu, Kwan-Liu Ma

Sam Yu-Te Lee

Attention-Aware Visualization: Tracking and Responding to User Perception Over Time

Authors: Arvind Srinivasan, Johannes Ellemose, Peter W. S. Butcher, Panagiotis D. Ritsos, Niklas Elmqvist

Arvind Srinivasan

SpreadLine: Visualizing Egocentric Dynamic Influence

Authors: Yun-Hsin Kuo, Dongyu Liu, Kwan-Liu Ma

Yun-Hsin Kuo

The Backstory to “Swaying the Public”: A Design Chronicle of Election Forecast Visualizations

Authors: Fumeng Yang, Mandi Cai, Chloe Rose Mortenson, Hoda Fakhari, Ayse Deniz Lokmanoglu, Nicholas Diakopoulos, Erik Nisbet, Matthew Kay

Fumeng Yang

A General Framework for Comparing Embedding Visualizations Across Class-Label Hierarchies

Authors: Trevor Manz, Fritz Lekschas, Evan Greene, Greg Finak, Nils Gehlenborg

Trevor Manz

Localized Evaluation for Constructing Discrete Vector Fields

Authors: Tanner Finken, Julien Tierny, Joshua A Levine

Tanner Finken

DataGarden: Formalizing Personal Sketches into Structured Visualization Templates

Authors: Anna Offenwanger, Theophanis Tsandilas, Fanny Chevalier

Anna Offenwanger

Guided Health-related Information Seeking from LLMs via Knowledge Graph Integration

Authors: Youfu Yan, Yu Hou, Yongkang Xiao, Rui Zhang, Qianwen Wang

Qianwen Wang

Learnable and Expressive Visualization Authoring Through Blended Interfaces

Authors: Sehi L'Yi, Astrid van den Brandt, Etowah Adams, Huyen N. Nguyen, Nils Gehlenborg

Sehi L'Yi

When Refreshable Tactile Displays Meet Conversational Agents: Investigating Accessible Data Presentation and Analysis with Touch and Speech

Authors: Samuel Reinders, Matthew Butler, Ingrid Zukerman, Bongshin Lee, Lizhen Qu, Kim Marriott

Samuel Reinders

DiffFit: Visually-Guided Differentiable Fitting of Molecule Structures to a Cryo-EM Map

Authors: Deng Luo, Zainab Alsuwaykit, Dawar Khan, Ondřej Strnad, Tobias Isenberg, Ivan Viola

Deng Luo

How Aligned are Human Chart Takeaways and LLM Predictions? A Case Study on Bar Charts with Varying Layouts

Authors: Huichen Will Wang, Jane Hoffswell, Sao Myat Thazin Thane, Victor S. Bursztyn, Cindy Xiong Bearfield

Huichen Will Wang

Beware of Validation by Eye: Visual Validation of Linear Trends in Scatterplots

Authors: Daniel Braun, Remco Chang, Michael Gleicher, Tatiana von Landesberger

Daniel Braun

DimBridge: Interactive Explanation of Visual Patterns in Dimensionality Reductions with Predicate Logic

Authors: Brian Montambault, Gabriel Appleby, Jen Rogers, Camelia D. Brumar, Mingwei Li, Remco Chang

Brian Montambault

Who Let the Guards Out: Visual Support for Patrolling Games

Authors: Matěj Lang, Adam Štěpánek, Róbert Zvara, Vojtěch Řehák, Barbora Kozlikova

Matěj Lang

Objective Lagrangian Vortex Cores and their Visual Representations

Authors: Tobias Günther, Holger Theisel

Tobias Günther

Dynamic Color Assignment for Hierarchical Data

Authors: Jiashu Chen, Weikai Yang, Zelin Jia, Lanxi Xiao, Shixia Liu

Jiashu Chen

Visual Support for the Loop Grafting Workflow on Proteins

Authors: Filip Opálený, Pavol Ulbrich, Joan Planas-Iglesias, Jan Byška, Jan Štourač, David Bednář, Katarína Furmanová, Barbora Kozlikova

Katarína Furmanová

ModalChorus: Visual Probing and Alignment of Multi-modal Embeddings via Modal Fusion Map

Authors: Yilin Ye, Shishi Xiao, Xingchen Zeng, Wei Zeng

Yilin Ye

AdaMotif: Graph Simplification via Adaptive Motif Design

Authors: Hong Zhou, Peifeng Lai, Zhida Sun, Xiangyuan Chen, Yang Chen, Huisi Wu, Yong Wang

Hong Zhou

2D Embeddings of Multi-dimensional Partitionings

Authors: Marina Evers, Lars Linsen

Marina Evers

Path-based Design Model for Constructing and Exploring Alternative Visualisations

Authors: James R Jackson, Panagiotis D. Ritsos, Peter W. S. Butcher, Jonathan C Roberts

Jonathan C Roberts

Cell2Cell: Explorative Cell Interaction Analysis in Multi-Volumetric Tissue Data

Authors: Eric Mörth, Kevin Sidak, Zoltan Maliga, Torsten Möller, Nils Gehlenborg, Peter Sorger, Hanspeter Pfister, Johanna Beyer, Robert Krüger

Eric Mörth

SpatialTouch: Exploring Spatial Data Visualizations in Cross-reality

Authors: Lixiang Zhao, Tobias Isenberg, Fuqi Xie, Hai-Ning Liang, Lingyun Yu

Lingyun Yu

TopoMap++: A faster and more space efficient technique to compute projections with topological guarantees

Authors: Vitoria Guardieiro, Felipe Inagaki de Oliveira, Harish Doraiswamy, Luis Gustavo Nonato, Claudio Silva

Vitoria Guardieiro

The Impact of Vertical Scaling on Normal Probability Density Function Plots

Authors: Racquel Fygenson, Lace M. Padilla

Racquel Fygenson

A Multi-Level Task Framework for Event Sequence Analysis

Authors: Kazi Tasnim Zinat, Saimadhav Naga Sakhamuri, Aaron Sun Chen, Zhicheng Liu

Kazi Tasnim Zinat

CSLens: Towards Better Deploying Charging Stations via Visual Analytics - A Coupled Networks Perspective

Authors: Yutian Zhang, Liwen Xu, Shaocong Tao, Quanxue Guan, Quan Li, Haipeng Zeng

Haipeng Zeng

Visual Analysis of Multi-outcome Causal Graphs

Authors: Mengjie Fan, Jinlu Yu, Daniel Weiskopf, Nan Cao, Huaiyu Wang, Liang Zhou

Mengjie Fan

Precise Embodied Data Selection in Room-scale Visualisations While Retaining View Context

Authors: Shaozhang Dai, Yi Li, Barrett Ens, Lonni Besançon, Tim Dwyer

Shaozhang Dai

Distributed Augmentation, Hypersweeps, and Branch Decomposition of Contour Trees for Scientific Exploration

Authors: Mingzhe Li, Hamish Carr, Oliver Rübel, Bei Wang, Gunther H Weber

Mingzhe Li

Uncertainty-Aware Deep Neural Representations for Visual Analysis of Vector Field Data

Authors: Atul Kumar, Siddharth Garg, Soumya Dutta

Soumya Dutta

Ferry: Toward Better Understanding of Input/Output Space for Data Wrangling Scripts

Authors: Zhongsu Luo, Kai Xiong, Jiajun Zhu, Ran Chen, Xinhuan Shu, Di Weng, Yingcai Wu

Zhongsu Luo

What University Students Learn In Visualization Classes

Authors: Maryam Hedayati, Matthew Kay

Maryam Hedayati

Structure-Aware Simplification for Hypergraph Visualization

Authors: Peter D Oliver, Eugene Zhang, Yue Zhang

Eugene Zhang

A Large-Scale Sensitivity Analysis on Latent Embeddings and Dimensionality Reductions for Text Spatializations

Authors: Daniel Atzberger, Tim Cech, Willy Scheibel, Jürgen Döllner, Michael Behrisch, Tobias Schreck

Daniel Atzberger

MSz: An Efficient Parallel Algorithm for Correcting Morse-Smale Segmentations in Error-Bounded Lossy Compressors

Authors: Yuxiao Li, Xin Liang, Bei Wang, Yongfeng Qiu, Lin Yan, Hanqi Guo

Yuxiao Li

Fast Comparative Analysis of Merge Trees Using Locality-Sensitive Hashing

Authors: Weiran Lyu, Raghavendra Sridharamurthy, Jeff M. Phillips, Bei Wang

Raghavendra Sridharamurthy

Interactive Design-of-Experiments: Optimizing a Cooling System

Authors: Rainer Splechtna, Majid Behravan, Mario Jelovic, Denis Gracanin, Helwig Hauser, Kresimir Matkovic

Kresimir Matkovic

Quality Metrics and Reordering Strategies for Revealing Patterns in BioFabric Visualizations

Authors: Johannes Fuchs, Alexander Frings, Maria-Viktoria Heinle, Daniel Keim, Sara Di Bartolomeo

Johannes Fuchs

Curio: A Dataflow-Based Framework for Collaborative Urban Visual Analytics

Authors: Gustavo Moreira, Maryam Hosseini, Carolina Veiga Ferreira de Souza, Lucas Alexandre, Nicola Colaninno, Daniel de Oliveira, Nivan Ferreira, Marcos Lage, Fabio Miranda

Fabio Miranda

HiRegEx: Interactive Visual Query and Exploration of Multivariate Hierarchical Data

Authors: Guozheng Li, Haotian Mi, Chi Harold Liu, Takayuki Itoh, Guoren Wang

Guozheng Li

HuBar: A Visual Analytics Tool to Explore Human Behaviour based on fNIRS in AR guidance systems

Authors: Sonia Castelo Quispe, João Rulff, Parikshit Solunke, Erin McGowan, Guande Wu, Iran Roman, Roque Lopez, Bea Steers, Qi Sun, Juan Pablo Bello, Bradley S Feest, Michael Middleton, Ryan McKendrick, Claudio Silva

Sonia Castelo Quispe

An Empirically Grounded Approach for Designing Shape Palettes

Authors: Chin Tseng, Arran Zeyu Wang, Ghulam Jilani Quadri, Danielle Albers Szafir

Chin Tseng

DaedalusData: Exploration, Knowledge Externalization and Labeling of Particles in Medical Manufacturing - A Design Study

Authors: Alexander Wyss, Gabriela Morgenshtern, Amanda Hirsch-Hüsler, Jürgen Bernard

Alexander Wyss

Regularized Multi-Decoder Ensemble for an Error-Aware Scene Representation Network

Authors: Tianyu Xiong, Skylar Wolfgang Wurster, Hanqi Guo, Tom Peterka, Han-Wei Shen

Tianyu Xiong

Evaluating and extending speedup techniques for optimal crossing minimization in layered graph drawings

Authors: Connor Wilson, Eduardo Puerta, Tarik Crnovrsanin, Sara Di Bartolomeo, Cody Dunne

Connor Wilson

Rapid and Precise Topological Comparison with Merge Tree Neural Networks

Authors: Yu Qin, Brittany Terese Fasy, Carola Wenk, Brian Summa

Yu Qin

Towards Enhancing Low Vision Usability of Data Charts on Smartphones

Authors: Yash Prakash, Pathan Aseef Khan, Akshay Kolgar Nayak, Sampath Jayarathna, Hae-Na Lee, Vikas Ashok

Yash Prakash

Revealing Interaction Dynamics: Multi-Level Visual Exploration of User Strategies with an Interactive Digital Environment

Authors: Peilin Yu, Aida Nordman, Marta M. Koc-Januchta, Konrad J Schönborn, Lonni Besançon, Katerina Vrotsou

Peilin Yu

Team-Scouter: Simulative Visual Analytics of Soccer Player Scouting

Authors: Anqi Cao, Xiao Xie, Runjin Zhang, Yuxin Tian, Mu Fan, Hui Zhang, Yingcai Wu

Anqi Cao

Visualizing Temporal Topic Embeddings with a Compass

Authors: Daniel Palamarchuk, Lemara Williams, Brian Mayer, Thomas Danielson, Rebecca Faust, Larry M Deschaine PhD, Chris North

Daniel Palamarchuk

Blowing Seeds Across Gardens: Visualizing Implicit Propagation of Cross-Platform Social Media Posts

Authors: Jianing Yin, Hanze Jia, Buwei Zhou, Tan Tang, Lu Ying, Shuainan Ye, Tai-Quan Peng, Yingcai Wu

Jianing Yin

DITTO: A Visual Digital Twin for Interventions and Temporal Treatment Outcomes in Head and Neck Cancer

Authors: Andrew Wentzel, Serageldin Attia, Xinhua Zhang, Guadalupe Canahuate, Clifton David Fuller, G. Elisabeta Marai

Andrew Wentzel

DeLVE into Earth’s Past: A Visualization-Based Exhibit Deployed Across Multiple Museum Contexts

Authors: Mara Solen, Nigar Sultana, Laura A. Lukes, Tamara Munzner

Mara Solen

AdversaFlow: Visual Red Teaming for Large Language Models with Multi-Level Adversarial Flow

Authors: Dazhen Deng, Chuhan Zhang, Huawei Zheng, Yuwen Pu, Shouling Ji, Yingcai Wu

Dazhen Deng

Entanglements for Visualization: Changing Research Outcomes through Feminist Theory

Authors: Derya Akbaba, Lauren Klein, Miriah Meyer

Derya Akbaba

Fine-Tuned Large Language Model for Visualization System: A Study on Self-Regulated Learning in Education

Authors: Lin Gao, Jing Lu, Zekai Shao, Ziyue Lin, Shengbin Yue, Chiokit Ieong, Yi Sun, Rory Zauner, Zhongyu Wei, Siming Chen

Lin Gao

Smartboard: Visual Exploration of Team Tactics with LLM Agent

Authors: Ziao Liu, Xiao Xie, Moqi He, Wenshuo Zhao, Yihong Wu, Liqi Cheng, Hui Zhang, Yingcai Wu

Ziao Liu

Causal Priors and Their Influence on Judgements of Causality in Visualized Data

Authors: Arran Zeyu Wang, David Borland, Tabitha C. Peck, Wenyuan Wang, David Gotz

Arran Zeyu Wang

PhenoFlow: A Human-LLM Driven Visual Analytics System for Exploring Large and Complex Stroke Datasets

Authors: Jaeyoung Kim, Sihyeon Lee, Hyeon Jeon, Keon-Joo Lee, Bohyoung Kim, Hee Joon, Jinwook Seo

Jaeyoung Kim

Touching the Ground: Evaluating the Effectiveness of Data Physicalizations for Spatial Data Analysis Tasks

Authors: Bridger Herman, Cullen D. Jackson, Daniel F. Keefe

Bridger Herman

IEEE VIS 2024 Content: VDS: Visualization in Data Science Symposium: VDS

VDS: Visualization in Data Science Symposium: VDS

Room: To Be Announced


Interactive Public Transport Infrastructure Analysis through Mobility Profiles: Making the Mobility Transition Transparent

Authors: Yannick Metz, Dennis Ackermann, Daniel Keim, Maximilian T. Fischer

Maximilian T. Fischer

Visualization and Automation in Data Science: Exploring the Paradox of Humans-in-the-Loop

Authors: Jen Rogers, Mehdi Chakhchoukh, Marie Anastacio, Rebecca Faust, Cagatay Turkay, Lars Kotthoff, Steffen Koch, Andreas Kerren, Jürgen Bernard

Jen Rogers

The Categorical Data Map: A Multidimensional Scaling-Based Approach

Authors: Frederik L. Dennig, Lucas Joos, Patrick Paetzold, Daniela Blumberg, Oliver Deussen, Daniel Keim, Maximilian T. Fischer

Frederik L. Dennig

Towards a Visual Perception-Based Analysis of Clustering Quality Metrics

Authors: Graziano Blasilli, Daniel Kerrigan, Enrico Bertini, Giuseppe Santucci

Graziano Blasilli

Interactive Counterfactual Exploration of Algorithmic Harms in Recommender Systems

Authors: Yongsu Ahn, Quinn K Wolter, Jonilyn Dick, Janet Dick, Yu-Ru Lin

Yongsu Ahn

Seeing the Shift: Keep an Eye on Semantic Changes in Times of LLMs

Authors: Raphael Buchmüller, Friederike Körte, Daniel Keim

Raphael Buchmüller

IEEE VIS 2024 Content: VIS Short Papers: Short Papers

VIS Short Papers: Short Papers

Room: To Be Announced


Data Guards: Challenges and Solutions for Fostering Trust in Data

Authors: Nicole Sultanum, Dennis Bromley, Michael Correll

Nicole Sultanum

Intuitive Design of Deep Learning Models through Visual Feedback

Authors: JunYoung Choi, Sohee Park, GaYeon Koh, Youngseo Kim, Won-Ki Jeong

JunYoung Choi

A Comparative Study of Neural Surface Reconstruction for Scientific Visualization

Authors: Siyuan Yao, Weixi Song, Chaoli Wang

Siyuan Yao

Accelerating Transfer Function Update for Distance Map based Volume Rendering

Authors: Michael Rauter, Lukas Zimmermann PhD, Markus Zeilinger PhD

Michael Rauter

FCNR: Fast Compressive Neural Representation of Visualization Images

Authors: Yunfei Lu, Pengfei Gu, Chaoli Wang

Yunfei Lu

On Combined Visual Cluster and Set Analysis

Authors: Nikolaus Piccolotto, Markus Wallinger, Silvia Miksch, Markus Bögl

Nikolaus Piccolotto

ImageSI: Semantic Interaction for Deep Learning Image Projections

Authors: Jiayue Lin, Rebecca Faust, Chris North

Rebecca Faust

A Literature-based Visualization Task Taxonomy for Gantt charts

Authors: Sayef Azad Sakin, Katherine E. Isaacs

Sayef Azad Sakin

Integrating Annotations into the Design Process for Sonifications and Physicalizations

Authors: Rhys Sorenson-Graff, S. Sandra Bae, Jordan Wirfs-Brock

S. Sandra Bae

GhostUMAP: Measuring Pointwise Instability in Dimensionality Reduction

Authors: Myeongwon Jung, Takanori Fujiwara, Jaemin Jo

Myeongwon Jung

DASH: A Bimodal Data Exploration Tool for Interactive Text and Visualizations

Authors: Dennis Bromley, Vidya Setlur

Dennis Bromley

Assessing Graphical Perception of Image Embedding Models using Channel Effectiveness

Authors: Soohyun Lee, Minsuk Chang, Seokhyeon Park, Jinwook Seo

Soohyun Lee

Design Patterns in Right-to-Left Visualizations: The Case of Arabic Content

Authors: Muna Alebri, Noëlle Rakotondravony, Lane Harrison

Muna Alebri

AEye: A Visualization Tool for Image Datasets

Authors: Florian Grötschla, Luca A Lanzendörfer, Marco Calzavara, Roger Wattenhofer

Florian Grötschla

Gridlines Mitigate Sine Illusion in Line Charts

Authors: Clayton J Knittel, Jane Awuah, Steven L Franconeri, Cindy Xiong Bearfield

Cindy Xiong Bearfield

A Two-Phase Visualization System for Continuous Human-AI Collaboration in Sequelae Analysis and Modeling

Authors: Yang Ouyang, Chenyang Zhang, He Wang, Tianle Ma, Chang Jiang, Yuheng Yan, Zuoqin Yan, Xiaojuan Ma, Chuhan Shi, Quan Li

Yang Ouyang

Hypertrix: An indicatrix for high-dimensional visualizations

Authors: Shivam Raval, Fernanda Viegas, Martin Wattenberg

Shivam Raval

Use-Coordination: Model, Grammar, and Library for Implementation of Coordinated Multiple Views

Authors: Mark S Keller, Trevor Manz, Nils Gehlenborg

Mark S Keller

Groot: An Interface for Editing and Configuring Automated Data Insights

Authors: Sneha Gathani, Anamaria Crisan, Vidya Setlur, Arjun Srinivasan

Sneha Gathani

ConFides: A Visual Analytics Solution for Automated Speech Recognition Analysis and Exploration

Authors: Sunwoo Ha, Chaehun Lim, R. Jordan Crouser, Alvitta Ottley

Sunwoo Ha

Connections Beyond Data: Exploring Homophily With Visualizations

Authors: Poorna Talkad Sukumar, Maurizio Porfiri, Oded Nov

Poorna Talkad Sukumar

The Comic Construction Kit: An Activity for Students to Learn and Explain Data Visualizations

Authors: Magdalena Boucher, Christina Stoiber, Mandy Keck, Victor Adriel de Jesus Oliveira, Wolfgang Aigner

Magdalena Boucher

Science in a Blink: Supporting Ensemble Perception in Scalar Fields

Authors: Victor A. Mateevitsi, Michael E. Papka, Khairi Reda

Khairi Reda

AltGeoViz: Facilitating Accessible Geovisualization

Authors: Chu Li, Rock Yuren Pang, Ather Sharif, Arnavi Chheda-Kothary, Jeffrey Heer, Jon E. Froehlich

Chu Li

Visualization of 2D Scalar Field Ensembles Using Volume Visualization of the Empirical Distribution Function

Authors: Tomas Rodolfo Daetz Chacon, Michael Böttinger, Gerik Scheuermann, Christian Heine

Tomas Rodolfo Daetz Chacon

Improving Property Graph Layouts by Leveraging Attribute Similarity for Structurally Equivalent Nodes

Authors: Patrick Mackey, Jacob Miller, Liz Faultersack

Patrick Mackey

Investigating the Apple Vision Pro Spatial Computing Platform for GPU-Based Volume Visualization

Authors: Camilla Hrycak, David Lewakis, Jens Harald Krueger

Camilla Hrycak

DaVE - A Curated Database of Visualization Examples

Authors: Jens Koenen, Marvin Petersen, Christoph Garth, Tim Gerrits

Jens Koenen

Feature Clock: High-Dimensional Effects in Two-Dimensional Plots

Authors: Olga Ovcharenko, Rita Sevastjanova, Valentina Boeva

Olga Ovcharenko

Opening the black box of 3D reconstruction error analysis with VECTOR

Authors: Racquel Fygenson, Kazi Jawad, Zongzhan Li, Francois Ayoub, Robert G Deen, Scott Davidoff, Dominik Moritz, Mauricio Hess-Flores

Mauricio Hess-Flores

Visualizations on Smart Watches while Running: It Actually Helps!

Authors: Sarina Kashanj, Xiyao Wang, Charles Perin

Charles Perin

PyGWalker: On-the-fly Assistant for Exploratory Visual Data Analysis

Authors: Yue Yu, Leixian Shen, Fei Long, Huamin Qu, Hao Chen

Yue Yu

Active Appearance and Spatial Variation Can Improve Visibility in Area Labels for Augmented Reality

Authors: Hojung Kwon, Yuanbo Li, Xiaohan Ye, Praccho Muna-McQuay, Liuren Yin, James Tompkin

James Tompkin

An Overview + Detail Layout for Visualizing Compound Graphs

Authors: Chang Han, Justin Lieffers, Clayton Morrison, Katherine E. Isaacs

Chang Han

Micro Visualizations on a Smartwatch: Assessing Reading Performance While Walking

Authors: Fairouz Grioui, Tanja Blascheck, Lijie Yao, Petra Isenberg

Fairouz Grioui

Visualizing an Exascale Data Center Digital Twin: Considerations, Challenges and Opportunities

Authors: Matthias Maiterth, Wes Brewer, Dane De Wet, Scott Greenwood, Vineet Kumar, Jesse Hines, Sedrick L Bouknight, Zhe Wang, Tim Dykes, Feiyi Wang

Matthias Maiterth

Curve Segment Neighborhood-based Vector Field Exploration

Authors: Nguyen K Phan, Guoning Chen

Nguyen K Phan

Counterpoint: Orchestrating Large-Scale Custom Animated Visualizations

Authors: Venkatesh Sivaraman, Frank Elavsky, Dominik Moritz, Adam Perer

Venkatesh Sivaraman

Fields, Bridges, and Foundations: How Researchers Browse Citation Network Visualizations

Authors: Kiroong Choe, Eunhye Kim, Sangwon Park, Jinwook Seo

Kiroong Choe

Can GPT-4V Detect Misleading Visualizations?

Authors: Jason Huang Alexander, Priyal H Nanda, Kai-Cheng Yang, Ali Sarvghad

Ali Sarvghad

A Ridge-based Approach for Extraction and Visualization of 3D Atmospheric Fronts

Authors: Anne Gossing, Andreas Beckert, Christoph Fischer, Nicolas Klenert, Vijay Natarajan, George Pacey, Thorwin Vogt, Marc Rautenhaus, Daniel Baum

Anne Gossing

Towards a Quality Approach to Hierarchical Color Maps

Authors: Tobias Mertz, Jörn Kohlhammer

Tobias Mertz

Topological Separation of Vortices

Authors: Adeel Zafar, Zahra Poorshayegh, Di Yang, Guoning Chen

Adeel Zafar

Animating the Narrative: A Review of Animation Styles in Narrative Visualization

Authors: Vyri Junhan Yang, Mahmood Jasim

Vyri Junhan Yang

LinkQ: An LLM-Assisted Visual Interface for Knowledge Graph Question-Answering

Authors: Harry Li, Gabriel Appleby, Ashley Suh

Harry Li

Design of a Real-Time Visual Analytics Decision Support Interface to Manage Air Traffic Complexity

Authors: Elmira Zohrevandi, Katerina Vrotsou, Carl A. L. Westin, Jonas Lundberg, Anders Ynnerman

Elmira Zohrevandi

Text-based transfer function design for semantic volume rendering

Authors: Sangwon Jeong, Jixian Li, Shusen Liu, Chris R. Johnson, Matthew Berger

Sangwon Jeong

Diffusion Explainer: Visual Explanation for Text-to-image Stable Diffusion

Authors: Seongmin Lee, Benjamin Hoover, Hendrik Strobelt, Zijie J. Wang, ShengYun Peng, Austin P Wright, Kevin Li, Haekyu Park, Haoyang Yang, Duen Horng (Polo) Chau

Seongmin Lee

Uniform Sample Distribution in Scatterplots via Sector-based Transformation

Authors: Hennes Rave, Vladimir Molchanov, Lars Linsen

Hennes Rave

Evaluating the Semantic Profiling Abilities of LLMs for Natural Language Utterances in Data Visualization

Authors: Hannah K. Bako, Arshnoor Bhutani, Xinyi Liu, Kwesi Adu Cobbina, Zhicheng Liu

Hannah K. Bako

Guided Statistical Workflows with Interactive Explanations and Assumption Checking

Authors: Yuqi Zhang, Adam Perer, Will Epperson

Yuqi Zhang

Representing Charts as Text for Language Models: An In-Depth Study of Question Answering for Bar Charts

Authors: Victor S. Bursztyn, Jane Hoffswell, Shunan Guo, Eunyee Koh

Victor S. Bursztyn

Building and Eroding: Exogenous and Endogenous Factors that Influence Subjective Trust in Visualization

Authors: R. Jordan Crouser, Syrine Matoussi, Lan Kung, Saugat Pandey, Oen G McKinley, Alvitta Ottley

R. Jordan Crouser

Multi-User Mobile Augmented Reality for Cardiovascular Surgical Planning

Authors: Pratham Darrpan Mehta, Rahul Ozhur Narayanan, Harsha Karanth, Haoyang Yang, Timothy C Slesnick, Fawwaz Shaw, Duen Horng (Polo) Chau

Pratham Darrpan Mehta

IEEE VIS 2024 Content: TVCG Invited Presentations: TVCG

TVCG Invited Presentations: TVCG

Room: To Be Announced


This is the Table I Want! Interactive Data Transformation on Desktop and in Virtual Reality

Authors: Sungwon In, Tica Lin, Chris North, Hanspeter Pfister, Yalong Yang

Sungwon In

On Network Structural and Temporal Encodings: A Space and Time Odyssey

Authors: Velitchko Filipov, Alessio Arleo, Markus Bögl, Silvia Miksch

Velitchko Filipov

Visualizing and Comparing Machine Learning Predictions to Improve Human-AI Teaming on the Example of Cell Lineage

Authors: Jiayi Hong, Ross Maciejewski, Alain Trubuil, Tobias Isenberg

Jiayi Hong

Effectiveness of Area-to-Value Legends and Grid Lines in Contiguous Area Cartograms

Authors: Kelvin L. T. Fung, Simon T. Perrault, Michael T. Gastner

Michael Gastner

What Does the Chart Say? Grouping Cues Guide Viewer Comparisons and Conclusions in Bar Charts

Authors: Cindy Xiong Bearfield, Chase Stokes, Andrew Lovett, Steven Franconeri

Cindy Xiong Bearfield

AdaVis: Adaptive and Explainable Visualization Recommendation for Tabular Data

Authors: Songheng Zhang, Yong Wang, Haotian Li, Huamin Qu

Songheng Zhang

GeoLinter: A Linting Framework for Choropleth Maps

Authors: Fan Lei, Arlen Fan, Alan M. MacEachren, Ross Maciejewski

Arlen Fan

What Do We Mean When We Say “Insight”? A Formal Synthesis of Existing Theory

Authors: Leilani Battle, Alvitta Ottley

Leilani Battle

Wasserstein Dictionaries of Persistence Diagrams

Authors: Keanu Sisouk, Julie Delon, Julien Tierny

Julien Tierny

Submerse: Visualizing Storm Surge Flooding Simulations in Immersive Display Ecologies

Authors: Saeed Boorboor, Yoonsang Kim, Ping Hu, Josef Moses, Brian Colle, Arie E. Kaufman

Saeed Boorboor

Eliciting Model Steering Interactions from Users via Data and Visual Design Probes

Authors: Anamaria Crisan, Maddie Shang, Eric Brochu

Anamaria Crisan

The Role of Text in Visualizations: How Annotations Shape Perceptions of Bias and Influence Predictions

Authors: Chase Stokes, Cindy Xiong Bearfield, Marti Hearst

Chase Stokes

InVADo: Interactive Visual Analysis of Molecular Docking Data

Authors: Marco Schäfer, Nicolas Brich, Jan Byška, Sérgio M. Marques, David Bednář, Philipp Thiel, Barbora Kozlíková, Michael Krone

Michael Krone

QuantumEyes: Towards Better Interpretability of Quantum Circuits

Authors: Shaolun Ruan, Qiang Guan, Paul Griffin, Ying Mao, Yong Wang

Shaolun Ruan

Wasserstein Auto-Encoders of Merge Trees (and Persistence Diagrams)

Authors: Mathieu Pont, Julien Tierny

Julien Tierny

VoxAR: Adaptive Visualization of Volume Rendered Objects in Optical See-Through Augmented Reality

Authors: Saeed Boorboor, Matthew S. Castellana, Yoonsang Kim, Zhutian Chen, Johanna Beyer, Hanspeter Pfister, Arie E. Kaufman

Saeed Boorboor

Interactive Reweighting for Mitigating Label Quality Issues

Authors: Weikai Yang, Yukai Guo, Jing Wu, Zheng Wang, Lan-Zhe Guo, Yu-Feng Li, Shixia Liu

Weikai Yang

Eliciting Multimodal and Collaborative Interactions for Data Exploration on Large Vertical Displays

Authors: Gabriela Molina León, Petra Isenberg, Andreas Breiter

Gabriela Molina León

Preliminary Guidelines For Combining Data Integration and Visual Data Analysis

Authors: Adam Coscia, Ashley Suh, Remco Chang, Alex Endert

Adam Coscia

Designing for Visualization in Motion: Embedding Visualizations in Swimming Videos

Authors: Lijie Yao, Romain Vuillemot, Anastasia Bezerianos, Petra Isenberg

Lijie Yao

Inclusion Depth for Contour Ensembles

Authors: Nicolas F. Chaves-de-Plaza, Prerak Mody, Marius Staring, René van Egmond, Anna Vilanova, Klaus Hildebrandt

Nicolás Cháves

Design Concerns for Integrated Scripting and Interactive Visualization in Notebook Environments

Authors: Connor Scully-Allison, Ian Lumsden, Katy Williams, Jesse Bartels, Michela Taufer, Stephanie Brink, Abhinav Bhatele, Olga Pearce, Katherine E. Isaacs

Connor Scully-Allison

A Visual Analytics System for Analyzing Dynamic Networks with Temporal Network Motifs

Authors: Seokweon Jung, DongHwa Shin, Hyeon Jeon, Kiroong Choe, Jinwook Seo

Seokweon Jung

A Comparative Study on Fixed-order Event Sequence Visualizations: Gantt, Extended Gantt, and Stringline Charts

Authors: Junxiu Tang, Fumeng Yang, Jiang Wu, Yifang Wang, Jiayi Zhou, Xiwen Cai, Lingyun Yu, Yingcai Wu

Junxiu Tang

Uncertainty-Aware Seasonal-Trend Decomposition Based on Loess

Authors: Tim Krake, Daniel Klötzl, David Hägele, Daniel Weiskopf

Tim Krake

Accelerating hyperbolic t-SNE

Authors: Martin Skrodzki, Hunter van Geffen, Nicolas F. Chaves-de-Plaza, Thomas Höllt, Elmar Eisemann, Klaus Hildebrandt

Martin Skrodzki

A Survey on Progressive Visualization

Authors: Alex Ulmer, Marco Angelini, Jean-Daniel Fekete, Jörn Kohlhammer, Thorsten May

Alex Ulmer

Beyond Vision Impairments: Redefining the Scope of Accessible Data Representations

Authors: Brianna L. Wimer, Laura South, Keke Wu, Danielle Albers Szafir, Michelle A. Borkin, Ronald A. Metoyer

Brianna Wimer

SmartGD: A GAN-Based Graph Drawing Framework for Diverse Aesthetic Goals

Authors: Xiaoqi Wang, Kevin Yen, Yifan Hu, Han-Wei Shen

Xiaoqi Wang

Decoupling Judgment and Decision Making: A Tale of Two Tails

Authors: Başak Oral, Pierre Dragicevic, Alexandru Telea, Evanthia Dimara

Başak Oral

Examining Limits of Small Multiples: Frame Quantity Impacts Judgments with Line Graphs

Authors: Helia Hosseinpour, Laura E. Matzen, Kristin M. Divis, Spencer C. Castro, Lace Padilla

Helia Hosseinpour

De-cluttering Scatterplots with Integral Images

Authors: Hennes Rave, Vladimir Molchanov, Lars Linsen

Hennes Rave

Bimodal Visualization of Industrial X-ray and Neutron Computed Tomography Data

Authors: Xuan Huang, Haichao Miao, Hyojin Kim, Andrew Townsend, Kyle Champley, Joseph Tringe, Valerio Pascucci, Peer-Timo Bremer

Xuan Huang

Agnostic Visual Recommendation Systems: Open Challenges and Future Directions

Authors: Luca Podo, Bardh Prenkaj, Paola Velardi

Luca Podo

Visual Analysis of Time-Stamped Event Sequences

Authors: Jürgen Bernard, Clara-Maria Barth, Eduard Cuba, Andrea Meier, Yasara Peiris, Ben Shneiderman

Jürgen Bernard

Visualization for diagnostic review of copy number variants in complex DNA sequencing data

Authors: Emilia Ståhlbom, Jesper Molin, Claes Lundström, Anders Ynnerman

Emilia Ståhlbom

TTK is Getting MPI-Ready

Authors: Eve Le Guillou, Michael Will, Pierre Guillou, Jonas Lukasczyk, Pierre Fortin, Christoph Garth, Julien Tierny

Julien Tierny

ChartGPT: Leveraging LLMs to Generate Charts from Abstract Natural Language

Authors: Yuan Tian, Weiwei Cui, Dazhen Deng, Xinjing Yi, Yurun Yang, Haidong Zhang, Yingcai Wu

Yuan Tian

Chart2Vec: A Universal Embedding of Context-Aware Visualizations

Authors: Qing Chen, Ying Chen, Ruishi Zou, Wei Shuai, Yi Guo, Jiazhe Wang, Nan Cao

Qing Chen

MARLens: Understanding Multi-agent Reinforcement Learning for Traffic Signal Control via Visual Analytics

Authors: Yutian Zhang, Guohong Zheng, Zhiyuan Liu, Quan Li, Haipeng Zeng

Haipeng Zeng

Active Gaze Labeling: Visualization for Trust Building

Authors: Maurice Koch, Nan Cao, Daniel Weiskopf, Kuno Kurzhals

Maurice Koch

Interpreting High-Dimensional Projections With Capacity

Authors: Yang Zhang, Jisheng Liu, Chufan Lai, Yuan Zhou, Siming Chen

Siming Chen

FMLens: Towards Better Scaffolding the Process of Fund Manager Selection in Fund Investments

Authors: Longfei Chen, Chen Cheng, He Wang, Xiyuan Wang, Yun Tian, Xuanwu Yue, Wong Kam-Kwai, Haipeng Zhang, Suting Hong, Quan Li

Longfei Chen

Memory Recall for Data Visualizations in Mixed Reality, Virtual Reality, 3D, and 2D

Authors: Christophe Hurter, Bernice Rogowitz, Guillaume Truong, Tiffany Andry, Hugo Romat, Ludovic Gardy, Fereshteh Amini, Nathalie Henry Riche

Christophe Hurter

VisTellAR: Embedding Data Visualization to Short-form Videos Using Mobile Augmented Reality

Authors: Wai Tong, Kento Shigyo, Lin-Ping Yuan, Mingming Fan, Ting-Chuen Pong, Huamin Qu, Meng Xia

Wai Tong

KMTLabeler: An Interactive Knowledge-Assisted Labeling Tool for Medical Text Classification

Authors: He Wang, Yang Ouyang, Yuchen Wu, Chang Jiang, Lixia Jin, Yuanwu Cao, Quan Li

He Wang

Nanomatrix: Scalable Construction of Crowded Biological Environments

Authors: Ruwayda Alharbi, Ondřej Strnad, Tobias Klein, Ivan Viola

Ruwayda Alharbi

PrompTHis: Visualizing the Process and Influence of Prompt Editing during Text-to-Image Creation

Authors: Yuhan Guo, Hanning Shao, Can Liu, Kai Xu, Xiaoru Yuan

Yuhan Guo

Evaluating Graphical Perception of Visual Motion for Quantitative Data Encoding

Authors: Shaghayegh Esmaeili, Samia Kabir, Anthony M. Colas, Rhema P. Linder, Eric D. Ragan

Shaghayegh Esmaeili

A Survey on Non-photorealistic Rendering Approaches for Point Cloud Visualization

Authors: Ole Wegen, Willy Scheibel, Matthias Trapp, Rico Richter, Jürgen Döllner

Ole Wegen

Enhancing Data Literacy On-demand: LLMs as Guides for Novices in Chart Interpretation

Authors: Kiroong Choe, Chaerin Lee, Soohyun Lee, Jiwon Song, Aeri Cho, Nam Wook Kim, Jinwook Seo

Kiroong Choe

Reviving Static Charts into Live Charts

Authors: Lu Ying, Yun Wang, Haotian Li, Shuguang Dou, Haidong Zhang, Xinyang Jiang, Huamin Qu, Yingcai Wu

Lu Ying

Interactive Hierarchical Timeline for Collaborative Text Negotiation in Historical Records

Authors: Gabriel D. Cantareira, Yiwen Xing, Nicholas Cole, Rita Borgo, Alfie Abdul-Rahman

Alfie Abdul-Rahman

WonderFlow: Narration-Centric Design of Animated Data Videos

Authors: Yun Wang, Leixian Shen, Zhengxin You, Xinhuan Shu, Bongshin Lee, John Thompson, Haidong Zhang, Dongmei Zhang

Leixian Shen

SenseMap: Urban Performance Visualization and Analytics via Semantic Textual Similarity

Authors: Juntong Chen, Qiaoyun Huang, Changbo Wang, Chenhui Li

Juntong Chen

Tracing NFT Impact Dynamics in Transaction-flow Substitutive Systems with Visual Analytics

Authors: Yifan Cao, Qing Shi, Lucas Shen, Kani Chen, Yang Wang, Wei Zeng, Huamin Qu

Yifan Cao

LEVA: Using Large Language Models to Enhance Visual Analytics

Authors: Yuheng Zhao, Yixing Zhang, Yu Zhang, Xinyi Zhao, Junjie Wang, Zekai Shao, Cagatay Turkay, Siming Chen

Yuheng Zhao

V-Mail: 3D-Enabled Correspondence about Spatial Data on (Almost) All Your Devices

Authors: Jung Who Nam, Tobias Isenberg, Daniel F. Keefe

Jung Who Nam

How Does Automation Shape the Process of Narrative Visualization: A Survey of Tools

Authors: Qing Chen, Shixiong Cao, Jiazhe Wang, Nan Cao

Qing Chen

You may also want to jump to the parent event to see related presentations: TVCG Invited Presentations

If there are any issues with the virtual streaming site, you can try to access the Discord and Slido pages for this session directly.

IEEE VIS 2024 Content: BELIV: evaluation and BEyond - methodoLogIcal approaches for Visualization: BELIV

BELIV: evaluation and BEyond - methodoLogIcal approaches for Visualization: BELIV

Room: To Be Announced


How Many Evaluations are Enough? A Position Paper on Evaluation Trend in Information Visualization

Authors: Feng Lin, Arran Zeyu Wang, Md Dilshadur Rahman, Danielle Albers Szafir, Ghulam Jilani Quadri

Ghulam Jilani Quadri

Testing the Test: Observations When Assessing Visualization Literacy of Domain Experts

Authors: Seyda Öney, Moataz Abdelaal, Kuno Kurzhals, Paul Betz, Cordula Kropp, Daniel Weiskopf

Seyda Öney

Design-Specific Transforms In Visualization

Authors: Eugene Wu, Remco Chang

Eugene Wu

Normalized Stress is Not Normalized: How to Interpret Stress Correctly

Authors: Kiran Smelser, Jacob Miller, Stephen Kobourov

Jacob Miller

The Role of Metacognition in Understanding Deceptive Bar Charts

Authors: Antonia Schlieder, Jan Rummel, Peter Albers, Filip Sadlo

Antonia Schlieder

Tasks and Telephone: Understanding Barriers to Inference due to Issues in Experiment Design

Authors: Abhraneel Sarma, Sheng Long, Michael Correll, Matthew Kay

Abhraneel Sarma

Visualising Lived Experience: Learning from a Master and Alternative Narrative Framing

Authors: Mai Elshehaly, Mirela Reljan-Delaney, Jason Dykes, Aidan Slingsby, Jo Wood, Sam Spiegel

Mai Elshehaly

Merits and Limits of Preregistration for Visualization Research

Authors: Lonni Besançon, Brian Nosek, Tamarinde Haven, Miriah Meyer, Cody Dunne, Mohammad Ghoniem

Lonni Besançon

Visualization Artifacts are Boundary Objects

Authors: Jasmine Tan Otto, Scott Davidoff

Jasmine Tan Otto

We Don't Know How to Assess LLM Contributions in VIS/HCI

Authors: Anamaria Crisan

Anamaria Crisan

Complexity as Design Material

Authors: Florian Windhager, Alfie Abdul-Rahman, Mark-Jan Bludau, Nicole Hengesbach, Houda Lamqaddam, Isabel Meirelles, Bettina Speckmann, Michael Correll

Florian Windhager

You may also want to jump to the parent event to see related presentations: BELIV: evaluation and BEyond - methodoLogIcal approaches for Visualization

If there are any issues with the virtual streaming site, you can try to access the Discord and Slido pages for this session directly.

\ No newline at end of file
diff --git a/program/session_w-eduvis0.html b/program/session_w-eduvis0.html
new file mode 100644
index 000000000..c19c2288d
--- /dev/null
+++ b/program/session_w-eduvis0.html
@@ -0,0 +1,187 @@
IEEE VIS 2024 Content: EduVis: Workshop on Visualization Education, Literacy, and Activities: EduVis

EduVis: Workshop on Visualization Education, Literacy, and Activities: EduVis

Room: To Be Announced


Challenges and Opportunities of Teaching Data Visualization Together with Data Science

Authors: Shri Harini Ramesh, Fateme Rajabiyazdi

Shri Harini Ramesh

Implementing the Solution Framework in a Social Impact Project

Authors: Victor Muñoz, Kevin Ford

Victor Muñoz

AdVizor: Using Visual Explanations to Guide Data-Driven Student Advising

Authors: Riley Weagant, Zixin Zhao, Adam Badley, Christopher Collins

Zixin Zhao

Teaching Information Visualization through Situated Design: Case Studies from the Classroom

Authors: Doris Kosminsky, Renata Perim Lopes, Regina Reznik

Doris Kosminsky

Developing a Robust Cartography Curriculum to Train the Professional Cartographer

Authors: Jonathan Nelson, P. William Limpisathian, Robert Roth

Jonathan Nelson

What makes school visits to digital science centers successful?

Authors: Andreas Göransson, Konrad J Schönborn

Andreas Göransson

An Inductive Approach for Identification of Barriers to PCP Literacy

Authors: Chandana Srinivas, Elif E. Firat, Robert S. Laramee, Alark Joshi

Alark Joshi

Space to Teach: Content-Rich Canvases for Visually-Intensive Education

Authors: Jesse Harden, Nurit Kirshenbaum, Roderick S Tabalba Jr., Ryan Theriot, Michael L. Rogers, Mahdi Belcaid, Chris North, Luc Renambot, Lance Long, Andrew E Johnson, Jason Leigh

Jesse Harden

Engaging Data-Art: Conducting a Public Hands-On Workshop

Authors: Jonathan C Roberts

Jonathan C Roberts

TellUs – Leveraging the power of LLMs with visualization to benefit science centers

Authors: Lonni Besançon, Mathis Brossier, Omar Mena, Erik Sundén, Andreas Göransson, Anders Ynnerman, Konrad J Schönborn

Lonni Besançon

What Can Educational Science Offer Visualization? A Reflective Essay

Authors: Konrad J Schönborn, Lonni Besançon

Lonni Besançon

You may also want to jump to the parent event to see related presentations: EduVis: Workshop on Visualization Education, Literacy, and Activities

If there are any issues with the virtual streaming site, you can try to access the Discord and Slido pages for this session directly.

\ No newline at end of file
diff --git a/program/session_w-energyvis0.html b/program/session_w-energyvis0.html
new file mode 100644
index 000000000..8a924213c
--- /dev/null
+++ b/program/session_w-energyvis0.html
@@ -0,0 +1,187 @@
IEEE VIS 2024 Content: EnergyVis 2024: 4th Workshop on Energy Data Visualization: EnergyVis

EnergyVis 2024: 4th Workshop on Energy Data Visualization: EnergyVis

Room: To Be Announced


Extreme Weather and the Power Grid: A Case Study of Winter Storm Uri

Authors: Baldwin Nsonga, Andy S Berres, Robert Jeffers, Caitlyn Clark, Hans Hagen, Gerik Scheuermann

Baldwin Nsonga

Architecture for Web-Based Visualization of Large-Scale Energy Domains

Authors: Graham Johnson, Sam Molnar, Nicholas Brunhart-Lupo, Kenny Gruchalla

Kenny Gruchalla

Pathways Explorer: Interactive Visualization of Climate Transition Scenarios

Authors: François Lévesque, Louis Beaumier, Thomas Hurtut

Thomas Hurtut

Challenges in Data Integration, Monitoring, and Exploration of Methane Emissions: The Role of Data Analysis and Visualization

Authors: Parisa Masnadi Khiabani, Gopichandh Danala, Wolfgang Jentner, David Ebert

Parisa Masnadi Khiabani

Operator-Centered Design of a Nodal Loadability Network Visualization

Authors: David Marino, Maxwell Keleher, Krzysztof Chmielowiec, Antony Hilliard, Pawel Dawidowski

David Marino

Situated Visualization of Photovoltaic Module Performance for Workforce Development

Authors: Nicholas Brunhart-Lupo, Kenny Gruchalla, Laurie Williams, Steve Ellis

Nicholas Brunhart-Lupo

CPIE: A Spatiotemporal Visual Analytic Tool to Explore the Impact of Coal Pollution

Authors: Sichen Jin, Lucas Henneman, Jessica Roberts

Sichen Jin

ChatGrid: Power Grid Visualization Empowered by a Large Language Model

Authors: Sichen Jin, Shrirang Abhyankar

Sichen Jin

You may also want to jump to the parent event to see related presentations: EnergyVis 2024: 4th Workshop on Energy Data Visualization

If there are any issues with the virtual streaming site, you can try to access the Discord and Slido pages for this session directly.

\ No newline at end of file
diff --git a/program/session_w-future0.html b/program/session_w-future0.html
new file mode 100644
index 000000000..97f7438d2
--- /dev/null
+++ b/program/session_w-future0.html
@@ -0,0 +1,187 @@
IEEE VIS 2024 Content: VISions of the Future: Workshop on Sustainable Practices within Visualization and Physicalisation: VISions of the Future

VISions of the Future: Workshop on Sustainable Practices within Visualization and Physicalisation: VISions of the Future

Room: To Be Announced


Rain Gauge: Exploring the Design and Sustainability of 3D Printed Clay Physicalizations

Authors: Bridger Herman, Jessica Rossi-Mastracci, Heather Willy, Molly Reichert, Daniel F. Keefe

Bridger Herman

(Almost) All Data is Absent Data

Authors: Karly Ross, Pratim Sengupta, Wesley Willett

Karly Ross

Renewable Energy Data Visualization: A study with Open Data

Authors: Gustavo Santos Silva, Artur Vinícius Lima Silva, Lucas Pereira Souza, Adrian Lauzid, Davi Maia

Gustavo Santos Silva

Visual and Data Journalism as Tools for Fighting Climate Change

Authors: Emilly Brito, Nivan Ferreira

Emilly Brito

You may also want to jump to the parent event to see related presentations: VISions of the Future: Workshop on Sustainable Practices within Visualization and Physicalisation

If there are any issues with the virtual streaming site, you can try to access the Discord and Slido pages for this session directly.

\ No newline at end of file
diff --git a/program/session_w-nlviz0.html b/program/session_w-nlviz0.html
new file mode 100644
index 000000000..65642e641
--- /dev/null
+++ b/program/session_w-nlviz0.html
@@ -0,0 +1,187 @@
IEEE VIS 2024 Content: NLVIZ Workshop: Exploring Research Opportunities for Natural Language, Text, and Data Visualization: NLVIZ

NLVIZ Workshop: Exploring Research Opportunities for Natural Language, Text, and Data Visualization: NLVIZ

Room: To Be Announced


Steering LLM Summarization with Visual Workspaces for Sensemaking

Authors: Xuxin Tang, Eric Krokos, Kirsten Whitley, Can Liu, Naren Ramakrishnan, Chris North

Xuxin Tang

Towards Real-Time Speech Segmentation for Glanceable Conversation Visualization

Authors: Shanna Li Ching Hollingworth, Wesley Willett

Shanna Li Ching Hollingworth

vitaLITy 2: Reviewing Academic Literature Using Large Language Models

Authors: Hongye An, Arpit Narechania, Kai Xu

Arpit Narechania

“Show Me What’s Wrong!”: Combining Charts and Text to Guide Data Analysis

Authors: Beatriz Feliciano, Rita Costa, Jean Alves, Javier Liébana, Diogo Ramalho Duarte, Pedro Bizarro

Beatriz Feliciano

Visualizing Spatial Semantics of Dimensionally Reduced Text Embeddings

Authors: Wei Liu, Chris North, Rebecca Faust

Rebecca Faust

Generating Analytic Specifications for Data Visualization from Natural Language Queries using Large Language Models

Authors: Subham Sah, Rishab Mitra, Arpit Narechania, Alex Endert, John Stasko, Wenwen Dou

Subham Sah

Towards Inline Natural Language Authoring for Word-Scale Visualizations

Authors: Paige So'Brien, Wesley Willett

Paige So'Brien

iToT: An Interactive System for Customized Tree-of-Thought Generation

Authors: Alan David Boyle, Isha Gupta, Sebastian Hönig, Lukas Mautner, Kenza Amara, Furui Cheng, Mennatallah El-Assady

Isha Gupta

Strategic management analysis: from data to strategy diagram by LLM

Authors: Richard Brath, Adam James Bradley, David Jonker

Richard Brath

A Preliminary Roadmap for LLMs as Visual Data Analysis Assistants

Authors: Harry Li, Gabriel Appleby, Ashley Suh

Harry Li

Enhancing Arabic Poetic Structure Analysis through Visualization

Authors: Abdelmalek Berkani, Adrian Holzer

Abdelmalek Berkani

You may also want to jump to the parent event to see related presentations: NLVIZ Workshop: Exploring Research Opportunities for Natural Language, Text, and Data Visualization

If there are any issues with the virtual streaming site, you can try to access the Discord and Slido pages for this session directly.

\ No newline at end of file
diff --git a/program/session_w-storygenai0.html b/program/session_w-storygenai0.html
new file mode 100644
index 000000000..6e0d66e9d
--- /dev/null
+++ b/program/session_w-storygenai0.html
@@ -0,0 +1,187 @@
IEEE VIS 2024 Content: Workshop on Data Storytelling in an Era of Generative AI: Data Story GenAI

Workshop on Data Storytelling in an Era of Generative AI: Data Story GenAI

Room: To Be Announced


The Data-Wink Ratio: Emoji Encoder for Generating Semantically-Resonant Unit Charts

Authors: Matthew Brehmer, Vidya Setlur, Zoe Zoe, Michael Correll

Matthew Brehmer

Constraint representation towards precise data-driven storytelling

Authors: Yu-Zhe Shi, Haotian Li, Lecheng Ruan, Huamin Qu

Yu-Zhe Shi

From Data to Story: Towards Automatic Animated Data Video Creation with LLM-based Multi-Agent Systems

Authors: Leixian Shen, Haotian Li, Yun Wang, Huamin Qu

Leixian Shen

Show and Tell: Exploring Large Language Model’s Potential in Formative Educational Assessment of Data Stories

Authors: Naren Sivakumar, Lujie Karen Chen, Pravalika Papasani, Vigna Majmundar, Jinjuan Heidi Feng, Louise Yarnall, Jiaqi Gong

Naren Sivakumar

You may also want to jump to the parent event to see related presentations: Workshop on Data Storytelling in an Era of Generative AI

If there are any issues with the virtual streaming site, you can try to access the Discord and Slido pages for this session directly.

\ No newline at end of file
diff --git a/program/session_w-topoinvis0.html b/program/session_w-topoinvis0.html
new file mode 100644
index 000000000..fa790df91
--- /dev/null
+++ b/program/session_w-topoinvis0.html
@@ -0,0 +1,187 @@
IEEE VIS 2024 Content: TopoInVis: Workshop on Topological Data Analysis and Visualization: TopoInVis

TopoInVis: Workshop on Topological Data Analysis and Visualization: TopoInVis

Room: To Be Announced


Critical Point Extraction from Multivariate Functional Approximation

Authors: Guanqun Ma, David Lenz, Tom Peterka, Hanqi Guo, Bei Wang

Guanqun Ma

Asymptotic Topology of 3D Linear Symmetric Tensor Fields

Authors: Xinwei Lin, Yue Zhang, Eugene Zhang

Eugene Zhang

Topological Simplification of Jacobi Sets for Piecewise-Linear Bivariate 2D Scalar Fields

Authors: Felix Raith, Gerik Scheuermann, Christian Heine

Felix Raith

Revisiting Accurate Geometry for the Morse-Smale Complexes

Authors: Son Le Thanh, Michael Ankele, Tino Weinkauf

Son Le Thanh

Multi-scale Cycle Tracking in Dynamic Planar Graphs

Authors: Farhan Rasheed, Abrar Naseer, Emma Nilsson, Talha Bin Masood, Ingrid Hotz

Farhan Rasheed

Efficient representation and analysis for a large tetrahedral mesh using Apache Spark

Authors: Yuehui Qian, Guoxi Liu, Federico Iuricich, Leila De Floriani

Yuehui Qian

You may also want to jump to the parent event to see related presentations: TopoInVis: Workshop on Topological Data Analysis and Visualization

If there are any issues with the virtual streaming site, you can try to access the Discord and Slido pages for this session directly.

\ No newline at end of file
diff --git a/program/session_w-uncertainty0.html b/program/session_w-uncertainty0.html
new file mode 100644
index 000000000..7912ecdb0
--- /dev/null
+++ b/program/session_w-uncertainty0.html
@@ -0,0 +1,187 @@
IEEE VIS 2024 Content: Uncertainty Visualization: Applications, Techniques, Software, and Decision Frameworks: Uncertainty Visualization

Uncertainty Visualization: Applications, Techniques, Software, and Decision Frameworks: Uncertainty Visualization

Room: To Be Announced


Voicing Uncertainty: How Speech, Text, and Visualizations Influence Decisions with Data Uncertainty

Authors: Chase Stokes, Chelsea Sanker, Bridget Cogley, Vidya Setlur

Chase Stokes

Uncertainty-Informed Volume Visualization using Implicit Neural Representation

Authors: Shanu Saklani, Chitwan Goel, Shrey Bansal, Zhe Wang, Soumya Dutta, Tushar M. Athawale, David Pugmire, Chris R. Johnson

Soumya Dutta

UADAPy: An Uncertainty-Aware Visualization and Analysis Toolbox

Authors: Patrick Paetzold, David Hägele, Marina Evers, Daniel Weiskopf, Oliver Deussen

Patrick Paetzold

FunM²C: A Filter for Uncertainty Visualization of Multivariate Data on Multi-Core Devices

Authors: Gautam Hari, Nrushad A Joshi, Zhe Wang, Qian Gong, David Pugmire, Kenneth Moreland, Chris R. Johnson, Scott Klasky, Norbert Podhorszki, Tushar M. Athawale

Gautam Hari

Glyph-Based Uncertainty Visualization and Analysis of Time-Varying Vector Field

Authors: Timbwaoga A. J. Ouermi, Jixian Li, Zachary Morrow, Bart van Bloemen Waanders, Chris R. Johnson

Timbwaoga A. J. Ouermi

Estimation and Visualization of Isosurface Uncertainty from Linear and High-Order Interpolation Methods

Authors: Timbwaoga A. J. Ouermi, Jixian Li, Tushar M. Athawale, Chris R. Johnson

Timbwaoga A. J. Ouermi

Accelerated Depth Computation for Surface Boxplots with Deep Learning

Authors: Mengjiao Han, Tushar M. Athawale, Jixian Li, Chris R. Johnson

Mengjiao Han

Visualizing Uncertainties in Ensemble Wildfire Forecast Simulations

Authors: Jixian Li, Timbwaoga A. J. Ouermi, Chris R. Johnson

Jixian Li

Uncertainty Visualization Challenges in Decision Systems with Ensemble Data & Surrogate Models

Authors: Sam Molnar, J.D. Laurence-Chasen, Yuhan Duan, Julie Bessac, Kristi Potter

Sam Molnar

Effects of Forecast Number, Order, and Cost in Multiple Forecast Visualizations

Authors: Laura Matzen, Mallory C Stites, Kristin M Divis, Alexander Bendeck, John Stasko, Lace M. Padilla

Laura Matzen

An Entropy-Based Test and Development Framework for Uncertainty Modeling in Level-Set Visualizations

Authors: Robert Sisneros, Tushar M. Athawale, Kenneth Moreland, David Pugmire

Robert Sisneros

You may also want to jump to the parent event to see related presentations: Uncertainty Visualization: Applications, Techniques, Software, and Decision Frameworks

If there are any issues with the virtual streaming site, you can try to access the Discord and Slido pages for this session directly.

\ No newline at end of file
diff --git a/program/session_w-vis4climate0.html b/program/session_w-vis4climate0.html
new file mode 100644
index 000000000..eb507fe22
--- /dev/null
+++ b/program/session_w-vis4climate0.html
@@ -0,0 +1,187 @@
IEEE VIS 2024 Content: Visualization for Climate Action and Sustainability: Vis4Climate

Visualization for Climate Action and Sustainability: Vis4Climate

Room: To Be Announced


TEST - The Paper

Authors: Fanny Chevalier

Fanny Chevalier

EcoViz: an iterative methodology for designing multifaceted data-driven environmental visualizations that communicate ecosystem impacts and envision nature-based solutions

Authors: Jessica Marielle Kendall-Bar, Isaac Nealey, Ian Costello, Christopher Lowrie, Kevin Huynh Nguyen, Paul J. Ponganis, Michael W. Beck, İlkay Altıntaş

Jessica Marielle Kendall-Bar

Eco-Garden: A Data Sculpture to Encourage Sustainable Practices in Everyday Life in Households

Authors: Dushani Ushettige, Nervo Verdezoto, Simon Lannon, Jullie Gwilliam, Parisa Eslambolchilar

Dushani Ushettige

You may also want to jump to the parent event to see related presentations: Visualization for Climate Action and Sustainability

If there are any issues with the virtual streaming site, you can try to access the Discord and Slido pages for this session directly.

\ No newline at end of file